JUnit Theories and Scala - scala

I'm looking for a way to test my Scala code with multiple inputs. Coming from Java/JUnit, I immediately thought of @RunWith(Theories.class).
Where I'm stuck is the usage of @DataPoints and the absence of static members/methods in Scala. So is there a way to write the following code in Scala?
@RunWith(classOf[Theories])
class ScalaTheory {
@DataPoints
val numbers = Array(1, 2, 3)
@Theory
def shouldMultiplyByTwo(number : Int) = {
// Given
val testObject = ObjectUnderTest
// When
val result = testObject.run(number)
// Then
assertTrue(result == number * 2)
}
}
I'm neither fixed on JUnit nor Theories so if there is something Scala-specific for this use case, I'm happy to use that.

To make this work, you have to do two things: use a method (see edit below), not a value, and secondly, define your @DataPoints in a companion object. The following should work:
object ScalaTheory {
@DataPoints
def numbers() = Array(1, 2, 3) // note def not val
}
@RunWith(classOf[Theories])
class ScalaTheory {
@Theory
def shouldMultiplyByTwo(number : Int) = {
// Given
val testObject = ObjectUnderTest
// When
val result = testObject.run(number)
// Then
assertTrue(result == number * 2)
}
}
When you define methods or fields in a companion object in Scala, you get a static forwarder in the class. Decompiling using JAD:
@DataPoints
public static final int[] numbers()
{
return ScalaTheory$.MODULE$.numbers();
}
So this takes care of the static problem. However, when we use a val numbers = ..., the annotation isn't carried over to the static forwarder, but it is for methods. So using def works.
As the others have said, if you're developing from scratch, it may be worth starting with a Scala framework like scalatest. The tool integration with scalatest is improving (e.g. Maven, Eclipse, IntelliJ), but it's not at the level of JUnit, so evaluate it for your project before you start.
EDIT: In fact, after this discussion on scala-user, you can use val, but you need to tell the scala compiler to apply the DataPoints annotation to the static forwarder:
object ScalaTheory {
@(DataPoints @scala.annotation.target.getter)
val numbers = Array(1, 2, 3)
}
The getter annotation says that the @DataPoints annotation should be applied to the accessor method for the numbers field, that is, the numbers() method which is created by the compiler. See the scala.annotation.target package.

I expect that what you want is possible with every Scala testing framework. I'm only familiar with Specs2, so here's my shot:
class DataPoints extends Specification {
val objectUnderTest: Int => Int = _ + 2
val testCases = 1 :: 2 :: 3 :: 4 :: Nil
def is: Fragments =
(objectUnderTest must multiplyByTwo((_: Int))).foreach(testCases)
def multiplyByTwo(i: Int): Matcher[(Int) => Int] =
(=== (i * 2)) ^^
((f: Int => Int) => f(i) aka "result of applying %s to %d".format(f, i))
}
And the output is:
result of applying <function1> to 1 '3' is not equal to '2'; result of applying <function1> to 3 '5' is not equal to '6'; result of applying <function1> to 4 '6' is not equal to '8'
Disclaimer
I do not state that this is very readable. Also I'm not an expert Specs2 user.

Just like ziggystar, I can't really help you with your direct question. But I strongly recommend switching to a Scala testing framework. My personal favorite is scalatest.
In the last example here: http://blog.schauderhaft.de/2011/01/16/more-on-testing-with-scalatest/ I demonstrate how simple and straightforward it is to write a test that runs with multiple inputs. It is just a simple loop!
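To illustrate the "simple loop" idea without any framework at all, here is a minimal sketch. ObjectUnderTest is a hypothetical stand-in for the asker's class (not from the blog post), implemented here as the multiply-by-two the question describes:

```scala
// Sketch: run the same assertion over several inputs with a plain loop.
// ObjectUnderTest is an illustrative stand-in, assumed to multiply by two.
object ObjectUnderTest {
  def run(number: Int): Int = number * 2
}

val numbers = List(1, 2, 3)
for (number <- numbers) {
  val result = ObjectUnderTest.run(number)
  assert(result == number * 2, s"failed for input $number")
}
```

Any testing framework's assertion can be substituted for the bare assert; the data-point mechanism is just iteration.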

Related

Scala test dependent methods used to calculate vals are executed only once

I am new to Scala, and I'm trying to figure out the best way to test the following process.
I have a class that gets a list of numbers from a constructor parameter. The class supports various operations on the list, and some operations may depend on the output of other operations. But every operation should only perform calculations on demand, and each should be done at most once. No calculations should be done in the constructor.
Example class definition:
InputList: List[Int]
x: returns a vector with the square of all elements in InputList
y: returns the sum of all elements in x
z: returns the square root of y
As for the class implementation, I think I was able to come up with a fitting solution, but now I can't figure out how I can test that the dependent tree of operations is executed only once.
Class Implementation Approach #1:
class Operations(nums: List[Int]) {
lazy val x: List[Int] = nums.map(n => n*n)
lazy val y: Int = x.sum
lazy val z: Double = scala.math.sqrt(y)
}
This was my first approach, which I'm confident will do the job, but I could not figure out how to properly test it, so I decided to add some helper methods to confirm they are being called just once.
Class Implementation Approach #2:
class Ops(nums: List[Int]) {
def square(numbers: List[Int]): List[Int] = {
println("calling square function")
numbers.map(n => n*n)
}
def sum(numbers: List[Int]): Int = {
println("calling sum method")
numbers.sum
}
def sqrt(num: Int): Double = {
println("calling sqrt method")
scala.math.sqrt(num)
}
lazy val x: List[Int] = square(nums)
lazy val y: Int = sum(x)
lazy val z: Double = sqrt(y)
}
I can now confirm each helper method is called just once, and only when necessary.
Now, how can I write tests for these processes? I've seen a few posts about Mockito and looked at the documentation, but was not able to find what I was looking for. I looked at the following:
Shows how to test whether a function is called once, but then how do I test whether other dependent functions were called?
http://www.scalatest.org/user_guide/testing_with_mock_objects#mockito
Mockito: How to verify a method was called only once with exact parameters ignoring calls to other methods?
Seems promising but I can't figure out the syntax:
https://github.com/mockito/mockito-scala
Example Tests I'd like to perform
var listoperations: Ops = new Ops(List(2, 4, 4))
listoperations.y // confirms 36 is returned; confirms square and sum methods were called just once
listoperations.x // confirms List(4, 16, 16) is returned and confirms square method was not called again
listoperations.z // confirms 6 is returned, sqrt method called once, and square and sum methods not called again
Ok, let's leave the premature optimisation argument for another time.
Mocks are meant to be used to stub/verify interactions with dependencies of your code (aka other classes), not to check its internals, so in order to achieve what you want you'd need something like this:
class Ops {
def square(numbers: List[Int]): List[Int] = numbers.map(n => n*n)
def sum(numbers: List[Int]): Int = numbers.sum
def sqrt(num: Int): Double = scala.math.sqrt(num)
}
class Operations(nums: List[Int])(implicit ops: Ops) {
lazy val x: List[Int] = ops.square(nums)
lazy val y: Int = ops.sum(x)
lazy val z: Double = ops.sqrt(y)
}
import org.mockito.{ArgumentMatchersSugar, IdiomaticMockito}
import org.scalatest.matchers.should.Matchers
import org.scalatest.wordspec.AnyWordSpec
class IdiomaticMockitoTest extends AnyWordSpec with Matchers with IdiomaticMockito with ArgumentMatchersSugar {
"operations" should {
"be memoised" in {
implicit val opsMock = spy(new Ops)
val testObj = new Operations(List(2, 4, 4))
testObj.x shouldBe List(4, 16, 16)
testObj.y shouldBe 36
testObj.y shouldBe 36 //call it again just for the sake of the argument
testObj.z shouldBe 6 //sqrt(36)
testObj.z shouldBe 6 //sqrt(36), call it again just for the sake of the argument
opsMock.sum(*) wasCalled once
opsMock.sqrt(*) wasCalled once
}
}
}
Hope it makes sense. You mentioned you're new to Scala, so I didn't want to go too crazy with implicits; this is a very basic example in which the API of your original Operations class stays the same, but the heavy lifting is extracted out to a third party that can be mocked, so you can verify the interactions.
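If you'd rather avoid a mocking library altogether, the same "evaluated exactly once" check can be done with plain counters. This is a sketch, not part of the answer above; the counter vars are illustrative additions to the asker's class:

```scala
// Sketch: count how many times each computation runs, then assert the
// counts after repeated access. Pure Scala, no mocking framework needed.
class Operations(nums: List[Int]) {
  var squareCalls, sumCalls, sqrtCalls = 0
  lazy val x: List[Int] = { squareCalls += 1; nums.map(n => n * n) }
  lazy val y: Int       = { sumCalls += 1; x.sum }
  lazy val z: Double    = { sqrtCalls += 1; math.sqrt(y) }
}

val ops = new Operations(List(2, 4, 4))
ops.z; ops.z; ops.x // repeated access, in any order
assert(ops.x == List(4, 16, 16))
assert(ops.y == 36)
assert(ops.z == 6.0)
assert(ops.squareCalls == 1 && ops.sumCalls == 1 && ops.sqrtCalls == 1)
```

The trade-off is that the counters live in production code, whereas the spy-based version keeps the class untouched.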
As you mentioned, Mockito is the way to go; here is an example:
class NumberOPSTest extends FunSuite with Matchers with Mockito {
test("testSum") {
val listoperations = smartMock[NumberOPS]
when(listoperations.sum(any)).thenCallRealMethod()
listoperations.sum(List(2, 4, 4)) shouldEqual 10
verify(listoperations, never()).sqrt(any)
}
}

Fibonacci memoization in Scala with Memo.mutableHashMapMemo

I am trying to implement the Fibonacci function in Scala with memoization.
One example given here uses a case statement:
Is there a generic way to memoize in Scala?
import scalaz.Memo
lazy val fib: Int => BigInt = Memo.mutableHashMapMemo {
case 0 => 0
case 1 => 1
case n => fib(n-2) + fib(n-1)
}
It seems the variable n is implicitly defined as the first argument, but I get a compilation error if I replace n with _.
Also, what advantage does the lazy keyword have here? The function seems to work equally well with and without it.
However, I wanted to generalize this to a more generic function definition with appropriate typing:
import scalaz.Memo
def fibonachi(n: Int) : Int = Memo.mutableHashMapMemo[Int, Int] {
var value : Int = 0
if( n <= 1 ) { value = n; }
else { value = fibonachi(n-1) + fibonachi(n-2) }
return value
}
but I get the following compilation error
cmd10.sc:4: type mismatch;
 found   : Int => Int
 required: Int
def fibonachi(n: Int) : Int = Memo.mutableHashMapMemo[Int, Int] {
                              ^
Compilation Failed
So I am trying to understand the generic way of adding memoization to a Scala def function.
One way to achieve a Fibonacci sequence is via a recursive Stream.
val fib: Stream[BigInt] = 0 #:: fib.scan(1:BigInt)(_+_)
An interesting aspect of streams is that, if something holds on to the head of the stream, the calculation results are auto-memoized. So, in this case, because the identifier fib is a val and not a def, the value of fib(n) is calculated only once and simply retrieved thereafter.
However, indexing a Stream is still a linear operation. If you want to memoize that away you could create a simple memo-wrapper.
def memo[A,R](f: A=>R): A=>R =
new collection.mutable.WeakHashMap[A,R] {
override def apply(a: A) = getOrElseUpdate(a,f(a))
}
val fib: Stream[BigInt] = 0 #:: fib.scan(1:BigInt)(_+_)
val mfib = memo(fib)
mfib(99) //res0: BigInt = 218922995834555169026
The more general question I am trying to ask is how to take a pre-existing def function and add a mutable/immutable memoization annotation/wrapper to it inline.
Unfortunately there is no way to do this in Scala, unless you are willing to use a macro annotation (which feels like overkill to me) or some very ugly design.
The contradicting requirements are "def" and "inline". The fundamental reason is that whatever you do inline with the def can't create any new place to store the memoized values (unless you use a macro that can rewrite code, introducing new vals/vars). You may try to work around this using some global cache, but that IMHO falls under the "ugly design" branch.
The design of ScalaZ Memo is used to create a val of the type Function[K,V] which is often written in Scala as just K => V instead of def. In this way the produced val can contain also the storage for the cached values. On the other hand syntactically the difference between usage of a def method and of a K => V function is minimal so this works pretty well. Since the Scala compiler knows how to convert def method into a function, you can wrap a def with Memo but you can't get a def out of it. If for some reason you need def anyway, you'll need another wrapper def.
import scalaz.Memo
object Fib {
def fib(n: Int): BigInt = n match {
case 0 => BigInt(0)
case 1 => BigInt(1)
case _ => fib(n - 2) + fib(n - 1)
}
// "fib _" converts a method into a function. Sometimes "_" might be omitted
// and compiler can imply it but sometimes the compiler needs this explicit hint
lazy val fib_mem_val: Int => BigInt = Memo.mutableHashMapMemo(fib _)
def fib_mem_def(n: Int): BigInt = fib_mem_val(n)
}
println(Fib.fib(5))
println(Fib.fib_mem_val(5))
println(Fib.fib_mem_def(5))
Note how there is no difference in syntax of calling fib, fib_mem_val and fib_mem_def although fib_mem_val is a value. You may also try this example online
Note: beware that some ScalaZ Memo implementations are not thread-safe.
As for the lazy part, the benefit is typical for any lazy val: the actual value, with its underlying storage, will not be created until the first usage. If the method will be used anyway, I see no benefit in declaring it lazy.
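The "val of function type whose closure owns the cache" design can also be sketched without the scalaz dependency. FibMemo and its explicit cache-lookup shape are illustrative assumptions, mirroring what Memo.mutableHashMapMemo does internally:

```scala
import scala.collection.mutable

// Dependency-free sketch of the Memo idea: fib is a val of type
// Int => BigInt, and the enclosing object owns the cache storage.
object FibMemo {
  private val cache = mutable.HashMap.empty[Int, BigInt]
  val fib: Int => BigInt = n =>
    cache.get(n) match {
      case Some(v) => v
      case None =>
        val v = if (n < 2) BigInt(n) else fib(n - 1) + fib(n - 2)
        cache.update(n, v) // store before returning
        v
    }
}

println(FibMemo.fib(50)) // 12586269025, linear time thanks to the cache
```

As in the scalaz version, call-site syntax is identical to a def, but only a function value can carry the storage with it.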

Why do each new instance of case classes evaluate lazy vals again in Scala?

From what I have understood, Scala treats val definitions as values.
So, any instance of a case class with same parameters should be equal.
But,
case class A(a: Int) {
lazy val k = {
println("k")
1
}
}
val a1 = A(5)
println(a1.k)
Output:
k
res1: Int = 1
println(a1.k)
Output:
res2: Int = 1
val a2 = A(5)
println(a2.k)
Output:
k
res3: Int = 1
I was expecting that for println(a2.k), it should not print k.
Since this is not the required behavior, how should I implement this so that for all instances of a case class with the same parameters, the lazy val definition is only executed once? Do I need some memoization technique, or can Scala handle this on its own?
I am very new to Scala and functional programming so please excuse me if you find the question trivial.
Assuming you're not overriding equals or doing something ill-advised like making the constructor args vars, it is the case that two case class instantiations with same constructor arguments will be equal. However, this does not mean that two case class instantiations with same constructor arguments will point to the same object in memory:
case class A(a: Int)
A(5) == A(5) // true, same as `A(5).equals(A(5))`
A(5) eq A(5) // false
If you want the constructor to always return the same object in memory, then you'll need to handle this yourself. Maybe use some sort of factory:
case class A private (a: Int) {
lazy val k = {
println("k")
1
}
}
object A {
private[this] val cache = collection.mutable.Map[Int, A]()
def build(a: Int) = {
cache.getOrElseUpdate(a, A(a))
}
}
val x = A.build(5)
x.k // prints k
val y = A.build(5)
y.k // doesn't print anything
x == y // true
x eq y // true
If, instead, you don't care about the constructor returning the same object, but you just care about the re-evaluation of k, you can just cache that part:
case class A(a: Int) {
lazy val k = A.kCache.getOrElseUpdate(a, {
println("k")
1
})
}
object A {
private[A] val kCache = collection.mutable.Map[Int, Int]()
}
A(5).k // prints k
A(5).k // doesn't print anything
The trivial answer is "this is what the language does according to the spec". That's the correct, but not very satisfying answer. It's more interesting why it does this.
It might be clearer that it has to do this with a different example:
case class A[B](b: B) {
lazy val k = {
println(b)
1
}
}
When you're constructing two A's, you can't know whether they are equal, because you haven't defined what it means for them to be equal (or what it means for B's to be equal). And you can't statically initialize k either, as it depends on the passed-in B.
If this has to print twice, it would be counterintuitive for that to happen only when k depends on b and not when it doesn't.
When you ask
how should I implement this so that for all instances of a case class with same parameters, it should only execute a lazy val definition only once
that's a trickier question than it sounds. You make "the same parameters" sound like something that can be known at compile time without further information. It's not, you can only know it at runtime.
And if you only know that at runtime, that means you have to keep all past instances of A[B] alive. This is a built-in memory leak - no wonder Scala has no built-in way to do this.
If you really want this - and think long and hard about the memory leak - construct a Map[B, A[B]], and try to get a cached instance from that map, and if it doesn't exist, construct one and put it in the map.
I believe case classes only consider the arguments to their constructor (not any auxiliary constructor) to be part of their equality concept. Consider when you use a case class in a match statement, unapply only gives you access (by default) to the constructor parameters.
Consider anything in the body of a case class as "extra" or "side effect" stuff. I consider it a good tactic to make case classes as near-empty as possible and put any custom logic in a companion object. E.g.:
case class Foo(a:Int)
object Foo {
def apply(s: String) = Foo(s.toInt)
}
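That point about unapply can be checked directly. This is a small sketch; FooFactory is an illustrative name used here (instead of the companion) only so the snippet runs standalone:

```scala
case class Foo(a: Int)

object FooFactory {
  // factory in the style of the companion apply(s: String) above
  def fromString(s: String): Foo = Foo(s.toInt)
}

// unapply exposes only the primary constructor parameter `a`
val extracted = FooFactory.fromString("42") match {
  case Foo(a) => a
}
assert(extracted == 42)

// equality is decided by `a` alone, regardless of which factory built it
assert(FooFactory.fromString("7") == Foo(7))
```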
In addition to dhg's answer, I should say I'm not aware of a functional language that does full constructor memoizing by default. You should understand that such memoizing means all constructed instances must stick around in memory, which is not always desirable.
Manual caching is not that hard; consider this simple code:
import scala.collection.mutable
class Doubler private(a: Int) {
lazy val double = {
println("calculated")
a * 2
}
}
object Doubler{
val cache = mutable.WeakHashMap.empty[Int, Doubler]
def apply(a: Int): Doubler = cache.getOrElseUpdate(a, new Doubler(a))
}
Doubler(1).double //calculated
Doubler(5).double //calculated
Doubler(1).double //most probably not calculated

how do I use filter function in Scala script

As part of my learning, I am trying to write the Scala expression below into a Scala script but am stuck with an error.
The Scala code I have successfully executed in the Scala REPL is:
def intList = List(1, 2, 3, 4, 5)
intList.filter(x => x%2 ==1).map(x => x * x).reduce((x,y) => x+y)
This successfully gets executed and the following is the result I get.
scala> intList.filter(x => x % 2 == 1).map(x => x * x).reduce((x,y) => x + y)
res15: Int = 35
I am trying to make this into a Scala script or class so as to rerun it any number of times on demand. I saved this in a file named SumOfSquaresOfOdd.scala:
import scala.collection.immutable.List
object SumOfSquaresOfOdd extends App
{
def main(args:Array[String]):Unit =
{
var intList = List[Integer](1,2,3,4,5,6,7,8,9,10)
def sum = intList.filter(x => x % 2 ==1).map(x => x * x).reduce((x+y) => x + y)
println sum
}
}
When I compile this using scalac, the following error is printed on the console.
λ scalac SumOfSquaresOfOdd.scala
SumOfSquaresOfOdd.scala:8: error: not a legal formal parameter.
Note: Tuples cannot be directly destructured in method or function parameters.
Either create a single parameter accepting the Tuple1,
or consider a pattern matching anonymous function: `{ case (param1, param1) => ... }`
def sum = intList.reduce(x => x % 2 ==1).map(x => x * x).reduce((x+y) => x + y)
^
one error found
How do I use the filter, map, reduce methods in a script? Appreciate your help and support.
UPDATE: Typos updated in the code.
I can answer your question:
How do I use the filter, map, reduce methods in a script?
But I can't fully solve your specific use case because you didn't specify what the script should be doing.
Try this code
object SumOfSquaresOfOdd {
def main(args: Array[String]) : Unit = {
var intList = List(1,2,3,4,5,6,7,8,9,10)
def sum = intList.filter(x => x % 2 ==1).map(x => x * x)
println(sum)
}
}
Then
~/Code/stack-overflow $ scalac SumOfSquaresOfOdd.scala
~/Code/stack-overflow $ scala SumOfSquaresOfOdd
List(1, 9, 25, 49, 81)
You seem to be a little lost. Here are some tips:
Use Int rather than Integer; Int is Scala's integer type. And you don't need to import it.
Don't extend App in this case. Refer to this question Difference between using App trait and main method in scala
Use wrapping parens for println
A literal List(1,2,3) will be type-inferenced to List[Int]; no need to explicitly type it. Check in the Scala REPL.
I think you confused reduce with filter. Compare both in the latest scaladoc: http://www.scala-lang.org/api/current/#scala.collection.immutable.List
Other ways to run scala code: http://joelabrahamsson.com/learning-scala-part-three-executing-scala-code/
Highly recommend you do Functional Programming Principles in Scala if you're serious about learning.
Good luck learning Scala! :)
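Putting the tips together, a complete version of the intended script might look like the sketch below (the reduce step, which the answer's version omitted, is restored; the comments show the intermediate values):

```scala
// Sketch: sum of squares of the odd numbers in 1..10.
object SumOfSquaresOfOdd {
  def main(args: Array[String]): Unit = {
    val intList = List(1, 2, 3, 4, 5, 6, 7, 8, 9, 10) // List[Int] inferred
    val sum = intList
      .filter(x => x % 2 == 1)  // keep odds: 1, 3, 5, 7, 9
      .map(x => x * x)          // square:    1, 9, 25, 49, 81
      .reduce((x, y) => x + y)  // sum:       165
    println(sum)                // note the wrapping parens
  }
}
```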

Unable to override implicits in scala

I am trying to learn scalaz and am still new to Scala (been using it for a few months now). I really like the type classes scalaz provides and am trying to document the different use cases for different features in scalaz. Right now I am having issues with the way implicits work and the way I want to do things.
Here is the code I would like to work
import scalaz._
import Scalaz._
object WhatIfIWantfFuzzyMatching extends App {
implicit object evensEquals extends Equal[Int] {
override def equal(left: Int, right: Int): Boolean = {
val leftMod = left % 2
val rightMod = right % 2
leftMod == rightMod
}
}
val even = 2
val odd = 3
assert(even =/= odd, "Shouldn't have matched!")
val evenMultTwo = even * 2
assert(even === evenMultTwo, "Both are even, so should have matched")
}
The idea is simple. For some parts of the code, I want the provided Equal[Int] from scalaz. In other parts of the code, I would like to override the Equal[Int] provided with a less strict one.
Right now I am hitting an issue where scala isn't able to figure out which implicit to use:
ambiguous implicit values:
both object evensEquals in object WhatIfIWantfFuzzyMatching of type com.gopivotal.scalaz_examples.equal.WhatIfIWantfFuzzyMatching.evensEquals.type
and value intInstance in trait AnyValInstances of type => scalaz.Monoid[Int] with scalaz.Enum[Int] with scalaz.Show[Int]
match expected type scalaz.Equal[Int]
assert(even =/= odd, "Shouldn't have matched!")
^
Looking at other threads on here, I see people saying to just change the type so there isn't a conflict, or to only import when needed, but in the case of scalaz's === and mixing and matching different equals methods, I am not sure how to get that to work with the compiler.
Any thoughts?
EDIT:
Here is a working example that lets you switch between implementations (thanks @alexey-romanov):
object WhatIfIWantToSwitchBack extends App {
// so what if I want to switch back to the other Equals?
object modEqualsInt extends Equal[Int] {
override def equal(left: Int, right: Int): Boolean = {
val leftMod = left % 2
val rightMod = right % 2
leftMod == rightMod
}
}
implicit var intInstance: Equal[Int] = Scalaz.intInstance
assert(2 =/= 4)
intInstance = modEqualsInt
assert(2 === 4)
intInstance = Scalaz.intInstance
assert(2 =/= 4)
}
You can just use the same name intInstance (but this means you'll lose the Monoid, Enum and Show instances).
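An alternative to the mutable var is to scope each implicit to the block that needs it. The sketch below uses a minimal hand-rolled type class (MyEqual, StrictInt, and FuzzyInt are illustrative names, not scalaz instances) to show the idea without the scalaz dependency:

```scala
// Sketch: pick the Equal-style instance per scope instead of mutating a var.
trait MyEqual[A] { def equal(l: A, r: A): Boolean }

object StrictInt extends MyEqual[Int] { def equal(l: Int, r: Int) = l == r }
object FuzzyInt  extends MyEqual[Int] { def equal(l: Int, r: Int) = l % 2 == r % 2 }

def eqv[A](l: A, r: A)(implicit e: MyEqual[A]): Boolean = e.equal(l, r)

locally { // strict equality applies in this block only
  implicit val e: MyEqual[Int] = StrictInt
  assert(!eqv(2, 4))
}
locally { // fuzzy ("same parity") equality applies here
  implicit val e: MyEqual[Int] = FuzzyInt
  assert(eqv(2, 4))
}
```

Because implicit resolution prefers the closest enclosing scope, each block sees exactly one unambiguous instance, avoiding the "ambiguous implicit values" error entirely.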