Can you dynamically generate test names for ScalaTest from input data?

I have a number of test data sets that run through the same ScalaTest unit tests. I'd love for each test data set to be its own set of named tests, so that if one data set fails a test I know exactly which one it was, rather than going to a single test and working out which data set it failed on. I just can't seem to find a way for the test names to be generated at runtime. I've looked at property- and table-based testing, and I'm currently using should behave like to share fixtures, but none of these seem to do what I want.
Have I not uncovered the right testing approach in ScalaTest or is this not possible?

You can write dynamic test cases with ScalaTest as Jonathan Chow describes in his blog post: http://blog.echo.sh/2013/05/12/dynamically-creating-tests-with-scalatest.html
However, I always prefer the WordSpec style of test definitions, and it works with dynamic test cases just as Jonathan describes.
class MyTest extends WordSpec with Matchers {
  "My test" should {
    Seq(1, 2, 3) foreach { count =>
      s"run test $count" in {
        count should be(count)
      }
    }
  }
}
Running this test produces 3 test cases:
TestResults
  MyTest
    My test
      run test 1
      run test 2
      run test 3
P.S. You can even register multiple test cases inside the same foreach block, using the same count variable.
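For instance, a minimal sketch of registering more than one test per element (the class name MyMultiTest is made up, and the imports assume the newer ScalaTest 3.x package layout; with older versions, WordSpec with Matchers works the same way):
import org.scalatest.matchers.should.Matchers
import org.scalatest.wordspec.AnyWordSpec

class MyMultiTest extends AnyWordSpec with Matchers {
  "My test" should {
    Seq(1, 2, 3) foreach { count =>
      // two tests registered per element, both using the same count value
      s"run test $count" in {
        count should be(count)
      }
      s"see that $count is positive" in {
        count should be > 0
      }
    }
  }
}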

You could write a base test class, and extend it for each data set. Something like this:
case class Person(name: String, age: Int)

abstract class MyTestBase extends WordSpec with Matchers {
  def name: String
  def dataSet: List[Person]

  s"Data set $name" should {
    "have no zero-length names" in {
      dataSet.foreach { s => s.name should not be empty }
    }
  }
}

class TheTest extends MyTestBase {
  override lazy val name = "Family" // note lazy, otherwise initialization fails
  override val dataSet = List(Person("Mom", 53), Person("Dad", 50))
}
Which produces output like this:
TheTest:
Data set Family
- should have no zero-length names

You can use Scala string interpolation in your test names. Using behavior functions, something like this would work:
case class Person(name: String, age: Int)

trait PersonBehaviors { this: FlatSpec =>
  // or add the data set name as a parameter to this function
  def personBehavior(person: => Person): Unit = {
    behavior of person.name

    it should s"have non-negative age: ${person.age}" in {
      assert(person.age >= 0)
    }
  }
}

class TheTest extends FlatSpec with PersonBehaviors {
  val person = Person("John", 32)
  personBehavior(person)
}
This produces output like this:
TheTest:
John
- should have non-negative age: 32

What about using ScalaTest's clue mechanism, so that any test failure reports as a clue which data set was being used?
You can use the withClue construct provided by Assertions,
which is extended by every style trait in ScalaTest, to add
extra information to reports of failed or canceled tests.
See also the documentation on AppendedClues
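For example, a rough sketch of that idea (the suite name and data sets are made up):
import org.scalatest.funsuite.AnyFunSuite
import org.scalatest.matchers.should.Matchers

class DataSetClueTest extends AnyFunSuite with Matchers {
  // Hypothetical data sets keyed by name; a failure reports which one was in use.
  val dataSets = Map("family" -> Seq(1, 2, 3), "neighbours" -> Seq(4, 5))

  test("all data sets are non-empty") {
    dataSets.foreach { case (name, data) =>
      withClue(s"data set '$name':") {
        data should not be empty
      }
    }
  }
}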

Related

How can I get list of all the tests in a test class implemented using ScalaTest?

We have a test report containing entries like
<testcase classname="ABC" name="should fulfill some condition" time="12.22">
How can I get this information in another test class? For example, suppose test class One has some tests implemented using ScalaTest. How can I get the list of all tests, with their descriptions, from class One in another test class Two?
You can call testNames to get the set of all test names.
scala> class FooSpec extends AnyFlatSpec {
| "foo" should "do thing" in { }
| it should "do something else" in { }
| }
class FooSpec
scala> new FooSpec().testNames
val res1: Set[String] = TreeSet(foo should do thing, foo should do something else)
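To address the "from another test class" part: a suite can simply instantiate the other suite and inspect its testNames. A rough sketch, assuming FooSpec from above is on the test classpath (BarSpec is a made-up name):
import org.scalatest.flatspec.AnyFlatSpec

class BarSpec extends AnyFlatSpec {
  "FooSpec" should "register both tests" in {
    val names = (new FooSpec).testNames
    assert(names.contains("foo should do thing"))
    assert(names.contains("foo should do something else"))
  }
}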

Scala Testing of Trait Applied to Implementing Sub-Classes

Given a Scala trait with implementing subclasses:
trait Foo {
  def doesStuff(): Int
}

case class Bar() extends Foo { ... }
case class Baz() extends Foo { ... }
How do the unit tests get organized in order to test the trait and then apply the tests to each of the implementations?
I'm looking for something of the form:
class FooSpec(foo: Foo) extends FlatSpec {
  "A Foo" should {
    "do stuff" in {
      assert(foo.doesStuff == 42)
    }
  }
}
Which would then be applied to each of the implementing classes:
FooSpec(Bar())
FooSpec(Baz())
If the implementations of Bar.doesStuff and Baz.doesStuff have different behavior, having two separate tests is the appropriate solution.
import org.scalatest.FlatSpec

class FooSpec1 extends FlatSpec {
  "a Bar" should "do a bar thing" in {
    assert(Bar().doesStuff() == 42)
  }

  "a Baz" should "do a baz thing" in {
    assert(Baz().doesStuff() % 2 == 0)
  }
}
However, if they have the same behavior, you can factor the check out into a function to avoid duplicating code. I don't believe ScalaTest can achieve this reuse pattern at the spec level in quite the way you're asking for.
import org.scalatest.FlatSpec

class FooSpec2 extends FlatSpec {
  def checkDoesStuff(foo: Foo): Unit =
    assert(foo.doesStuff() == 42)

  "a Bar" should "do a bar thing" in {
    checkDoesStuff(Bar())
  }

  "a Baz" should "do a baz thing" in {
    checkDoesStuff(Baz())
  }
}
Property-based testing can do exactly what you're looking for, though. Here's an example using ScalaCheck:
import org.scalacheck.{Gen, Properties}
import org.scalacheck.Prop.forAll

object FooProperties extends Properties("Foo") {
  val fooGen: Gen[Foo] = Gen.pick(1, List(Bar(), Baz())).map(_.head)

  property("a Foo always does stuff") = forAll(fooGen) {
    (foo: Foo) => foo.doesStuff() == 42
  }
}
Unlike ScalaTest specs, properties are functions. The forAll function takes a generator, samples values from it, and runs the test on each sample. Our generator always returns either an instance of Bar or an instance of Baz, which means the property covers all the cases you're looking to test. If a single sample fails, forAll fails the entire property.
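If the set of implementations is fixed and known up front, Gen.oneOf is arguably a more direct way to build the same generator (a minor variation on the code above, not something the question requires):
import org.scalacheck.Gen

val fooGen: Gen[Foo] = Gen.oneOf(Bar(), Baz())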
Whilst the accepted answer definitely works, ScalaCheck runs 100 iterations by default and is typically used to tackle a different issue than the one the question describes.
Perhaps this wasn't available in 2019, but I would like to contribute an answer that, as of ScalaTest 3.2.9 (and perhaps earlier), is more applicable.
You can create a trait to house all the reusable tests and then mix it into your unit tests. I will give a brief example below using the classes from the question. The full example is provided at this link under "Shared tests".
trait FooBehaviours { this: AnyFlatSpec =>
  /*
   The tests defined here should consider how your trait behaves in certain conditions.
   Note that the test names must be distinct within a single "behavior of" scope,
   otherwise ScalaTest reports a duplicate test name.
  */
  def emptyFoo(foo: => Foo): Unit = {
    it should "do stuff when empty" in {
      assert(foo.doesStuff == 42)
    }
  }

  def fullFoo(foo: => Foo): Unit = {
    it should "do stuff when full" in {
      assert(foo.doesStuff == 420)
    }
  }
}
class FooSpec extends AnyFlatSpec with FooBehaviours {
  // these are defs (not vals) so that, combined with the call-by-name (": => Foo")
  // parameters above, each behaviour test gets a freshly created instance
  def bar = Bar()
  def baz = Baz()

  behavior of "A bar"

  it should behave like emptyFoo(bar)
  it should behave like fullFoo(bar)

  behavior of "A baz"

  it should behave like emptyFoo(baz)
  it should behave like fullFoo(baz)
}
Excerpts from the link, in case it stops working:
Sometimes you may want to run the same test code on different fixture
objects. In other words, you may want to write tests that are "shared"
by different fixture objects. To accomplish this in an AnyFlatSpec, you
first place shared tests in behavior functions. These behavior
functions will be invoked during the construction phase of any
AnyFlatSpec that uses them, so that the tests they contain will be
registered as tests in that AnyFlatSpec. For example, given this stack
class:
...
You may want to test the Stack class in different states:
empty, full, with one item, with one item less than capacity, etc. You
may find you have several tests that make sense any time the stack is
non-empty. Thus you'd ideally want to run those same tests for three
stack fixture objects: a full stack, a stack with one item, and a
stack with one item less than capacity. With shared tests, you can
factor these tests out into a behavior function, into which you pass
the stack fixture to use when running the tests. So in your
AnyFlatSpec for stack, you'd invoke the behavior function three times,
passing in each of the three stack fixtures so that the shared tests
are run for all three fixtures. You can define a behavior function
that encapsulates these shared tests inside the AnyFlatSpec that uses
them. If they are shared between different AnyFlatSpecs, however, you
could also define them in a separate trait that is mixed into each
AnyFlatSpec that uses them.

ScalaTest eventually configuration for all implementations?

I have many tests that run asynchronous code in Scala, and I am working with ScalaTest's Eventually. It lets me wait until a given asynchronous event has happened and then expect certain values.
But eventually has a timeout that can be overridden for each test, so it does not fail while waiting for a value that is not there yet.
I would like to change this timeout for all usages of eventually with a single configuration change for all the tests.
In terms of code I do the following:
class AsyncCharacterTest extends WordSpec
  with Matchers
  with Eventually
  with BeforeAndAfterAll
  with ResponseAssertions {

  "The service" should {
    "provide the character that was just created" in {
      val character = Character(name = "juan", age = 32)
      service.createIt(character)
      eventually(timeout(2.seconds)) {
        responseAs[Character] should be(character)
      }
    }
  }
}
I would like not to have to write this timeout(2.seconds) for each test. I would like a single configuration for all the tests, with the chance to override the timeout for specific cases.
Is this possible with ScalaTest's Eventually? It would help me write more DRY code.
Something like
Eventually.default.timeout = 2.seconds
which would then apply to all the tests at once, with 2 seconds as the default.
Essentially, what you currently do with eventually(timeout(...)) is provide an explicit value for an implicit parameter.
One simple way to achieve what you want would be to do the following:
Remove all the explicit timeout(...) arguments from your eventually calls.
Create a trait that supplies the desired default timeout as an implicit value (extending Eventually brings the PatienceConfig type into scope and lets you override the default):
trait EventuallyTimeout extends Eventually {
  implicit override val patienceConfig: PatienceConfig = PatienceConfig(timeout = ..., interval = ...)
}
Mix this trait into all of your tests:
class AsyncCharacterTest extends WordSpec with EventuallyTimeout with ...
Full example:
// likely in a different file
trait EventuallyTimeout extends Eventually {
  implicit override val patienceConfig: PatienceConfig = PatienceConfig(timeout = ..., interval = ...)
}

class AsyncCharacterTest extends WordSpec
  with Matchers
  with Eventually
  with BeforeAndAfterAll
  with ResponseAssertions
  with EventuallyTimeout {

  "The service" should {
    "provide the character that was just created" in {
      val character = Character(name = "juan", age = 32)
      service.createIt(character)
      eventually {
        responseAs[Character] should be(character)
      }
    }
  }
}
For more details, refer to the Eventually docs and the documentation on implicits.
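If a specific test still needs a different timeout, the explicit form keeps working and takes precedence over the implicit config for that single call. A rough sketch (the class name, values, and placeholder assertion are made up; SpanSugar is used for the Span literals, and the newer AnyWordSpec/Matchers imports are assumed):
import org.scalatest.concurrent.Eventually
import org.scalatest.matchers.should.Matchers
import org.scalatest.time.SpanSugar._
import org.scalatest.wordspec.AnyWordSpec

class SlowCaseTest extends AnyWordSpec with Matchers with Eventually {

  // Suite-wide default, playing the role of the EventuallyTimeout trait above.
  implicit override val patienceConfig: PatienceConfig =
    PatienceConfig(timeout = 2.seconds, interval = 100.millis)

  "a slow operation" should {
    "be coverable by an explicit timeout" in {
      // The explicit timeout wins over the implicit config for this test only.
      eventually(timeout(10.seconds)) {
        1 + 1 should be(2) // placeholder assertion
      }
    }
  }
}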
Finally, on a side note: eventually is mostly intended for integration testing. You might want to consider different mechanisms, such as:
the ScalaFutures trait + whenReady method - similar to the eventually approach.
the Async* spec counterparts (i.e. AsyncFunSpec, AsyncWordSpec, etc.).
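For illustration, a rough sketch of the ScalaFutures route (the class and value names are made up; whenReady also honours an implicit PatienceConfig, so the same shared-trait trick applies to it):
import scala.concurrent.Future
import org.scalatest.concurrent.ScalaFutures
import org.scalatest.matchers.should.Matchers
import org.scalatest.wordspec.AnyWordSpec

class CharacterFutureTest extends AnyWordSpec with Matchers with ScalaFutures {
  "createIt" should {
    "complete with the created character" in {
      // Hypothetical future standing in for the real service call.
      val result: Future[String] = Future.successful("juan")
      whenReady(result) { value =>
        value should be("juan")
      }
    }
  }
}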

Is it possible to mock / override dependencies / imports in Scala?

I have some code looking like this:
package org.samidarko.actors

import akka.actor.Actor
import org.samidarko.helpers.Lib

class Monitoring extends Actor {
  override def receive: Receive = {
    case Tick =>
      Lib.sendNotification()
  }
}
Is there a way to mock/stub Lib from ScalaTest, like proxyquire does for Node.js?
I read that I could use dependency injection, but I would rather not do that.
Is my only alternative to pass the lib as a class parameter?
class Monitoring(lib: Lib) extends Actor {
Any advice on making it more testable? Thanks
EDIT:
Xavier Guihot's answer is an interesting approach to the problem, but I chose to change the code for testing purposes.
I'm passing the Lib as a parameter and mocking it with Mockito; it makes the code easier to test and maintain than shadowing the scope.
This answer only uses ScalaTest and doesn't impact the source code:
Basic solution:
Let's say you have this source class (the one you want to test and whose dependency you want to mock):
package com.my.code

import com.lib.LibHelper

class MyClass() {
  def myFunction(): String = LibHelper.help()
}
and this library dependency (which you want to mock / override when testing MyClass):
package com.lib

object LibHelper {
  def help(): String = "hello world"
}
The idea is to create a file in your test folder which will override/shadow the library: it declares an object with the same name and the same package as the one you want to mock. In src/test/scala/com/external/lib, you can create LibHelper.scala containing this code:
package com.lib

object LibHelper {
  def help(): String = "hello world - overriden"
}
And then you can test your code the usual way:
package com.my.code

import org.scalatest.FunSuite

class MyClassTest extends FunSuite {
  test("my_test") {
    assert(new MyClass().myFunction() === "hello world - overriden")
  }
}
Improved solution, which allows setting the behavior of the mock for each test:
The previous code is clear and simple, but the mocked behavior of LibHelper is the same for all tests, and one might want a method of LibHelper to produce different outputs. We can therefore add a mutable variable to LibHelper and update it before each test in order to set the desired behavior. (This only works because LibHelper is an object.)
The shadowing LibHelper (the one in src/test/scala/com/external/lib) should be replaced with:
package com.lib

object LibHelper {
  var testName = "test_1"

  def help(): String =
    testName match {
      case "test_1" => "hello world - overriden - test 1"
      case "test_2" => "hello world - overriden - test 2"
    }
}
And the ScalaTest class becomes:
package com.my.code

import com.lib.LibHelper
import org.scalatest.FunSuite

class MyClassTest extends FunSuite {

  test("test_1") {
    LibHelper.testName = "test_1"
    assert(new MyClass().myFunction() === "hello world - overriden - test 1")
  }

  test("test_2") {
    LibHelper.testName = "test_2"
    assert(new MyClass().myFunction() === "hello world - overriden - test 2")
  }
}
One important caveat: since we're using a global variable, it is mandatory to force ScalaTest to run the tests sequentially (not in parallel). The corresponding setting (to be included in build.sbt) is:
parallelExecution in Test := false
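On sbt 1.x, the same setting is usually written with the slash syntax:
Test / parallelExecution := false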
Not a complete answer (as I don't know AOP very well), but to point you in the right direction: this is possible with a Java library called AspectJ:
https://blog.jayway.com/2007/02/16/static-mock-using-aspectj/
https://www.cakesolutions.net/teamblogs/2013/08/07/aspectj-with-akka-scala
Example in pseudocode (without going into details):
class Mock extends MockAspect {
  @Pointcut("execution(* org.samidarko.helpers.Lib.sendNotification(..))")
  def intercept() { ... }
}
The low-level basis of this approach is dynamic proxies: https://dzone.com/articles/java-dynamic-proxy. However, you can mock static methods too (you may just have to add the word static to the pointcut pattern).
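To get a feel for what a dynamic proxy does under the hood, here is a small self-contained sketch using java.lang.reflect.Proxy (the Notifier trait and the demo object are made up for illustration; this shows only the underlying JDK mechanism, not what AspectJ generates):
import java.lang.reflect.{InvocationHandler, Method, Proxy}

// A purely abstract trait compiles to a JVM interface, which is what Proxy needs.
trait Notifier {
  def sendNotification(): Unit
}

object DynamicProxyDemo extends App {
  // The handler intercepts every call made through the proxied interface.
  val handler = new InvocationHandler {
    override def invoke(proxy: AnyRef, method: Method, args: Array[AnyRef]): AnyRef = {
      println(s"intercepted call to ${method.getName}")
      null // the proxied method returns Unit, so there is nothing meaningful to return
    }
  }

  val mocked = Proxy.newProxyInstance(
    getClass.getClassLoader,
    Array(classOf[Notifier]),
    handler
  ).asInstanceOf[Notifier]

  mocked.sendNotification() // prints: intercepted call to sendNotification
}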

Programmatically adding new tests via ScalaTest

I have a set of input cases stored in a file.
I would like each case to be a specific ScalaTest "test", i.e. reported in the console as an individual test and failed individually.
Unfortunately, experimentation and Google suggest that this capability might not be present.
E.g., this seems to be the common case (elided for simplicity):
class MyTestingGoop extends FunSuite {
  val input: Seq[SpecificTestCase] = ...

  test("input data test") {
    forAll(input) { c => ... }
  }
  // ...
}
Ideally, each case presents as a separate test. How can this be done with ScalaTest?
You can do this:
class MyTestingGoop extends FunSuite {
  val input: Seq[SpecificTestCase] = ...

  input.foreach { testCase =>
    test("testing input " + testCase) {
      // do something with the test case
    }
  }
}
The only limit is that each element of input has a unique toString.
Basically, calling test in FunSuite registers the test and runs it later, so as long as your test creation is done as part of the class construction and each test has a unique name, you should be fine.
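As a concrete illustration of the question's file-driven setup, here is a hedged sketch that reads one case per line from a file and registers a test per case (the file path, the "input,expected" line format, and the parsing are all assumptions made up for the example):
import scala.io.Source
import org.scalatest.funsuite.AnyFunSuite

class FileDrivenTests extends AnyFunSuite {
  // Hypothetical file with lines like "2,4", meaning: input 2, expected 4.
  val cases: Seq[(Int, Int)] =
    Source.fromFile("src/test/resources/cases.txt").getLines().toList.map { line =>
      val Array(rawInput, rawExpected) = line.split(",")
      (rawInput.trim.toInt, rawExpected.trim.toInt)
    }

  // One registered test per case, so each failure is reported individually.
  cases.foreach { case (input, expected) =>
    test(s"doubling $input gives $expected") {
      assert(input * 2 == expected)
    }
  }
}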