Scala test eventually configuration for all implementations?

I have many tests that run asynchronous code in Scala, and I am working with ScalaTest's Eventually (org.scalatest.concurrent.Eventually). It lets me assert that, once a certain asynchronous event has happened, I can expect certain values.
But eventually has a timeout that can be overridden for each test so it does not fail while waiting for a value that is not there yet...
I would like to change this timeout for every use of eventually with a single configuration change that applies to all the tests.
In code I do the following:
class AsyncCharacterTest extends WordSpec
  with Matchers
  with Eventually
  with BeforeAndAfterAll
  with ResponseAssertions {

  "Should provide the character that was just created" should {
    val character = Character(name = "juan", age = 32)
    service.createIt(character)
    eventually(timeout(2.seconds)) {
      responseAs[Character] should be(character)
    }
  }
}
I would like not to have to write timeout(2.seconds) for each test... I would like a single configuration for all the tests, with the option to override the timeout for specific cases.
Is this possible with ScalaTest's Eventually? It would help me write more DRY code.
Something like
Eventually.default.timeout = 2.seconds
which would then apply to all the tests at once, with 2 seconds as the default timeout.

Essentially, what you currently do with eventually(timeout(...)) is provide an explicit value for an implicit parameter.
One simple way to achieve what you want is the following:
Remove all the explicit timeout(...) arguments from your eventually calls.
Create a trait that holds the desired default timeout as an implicit value:
trait EventuallyTimeout extends Eventually {
  implicit override val patienceConfig: PatienceConfig = PatienceConfig(timeout = ..., interval = ...)
}
Mix this trait into all of your tests:
class AsyncCharacterTest extends WordSpec with EventuallyTimeout with ...
Full example:
// likely in a different file
trait EventuallyTimeout extends Eventually {
  implicit override val patienceConfig: PatienceConfig = PatienceConfig(timeout = ..., interval = ...)
}
class AsyncCharacterTest extends WordSpec
  with Matchers
  with Eventually
  with BeforeAndAfterAll
  with ResponseAssertions
  with EventuallyTimeout {

  "Should provide the character that was just created" should {
    val character = Character(name = "juan", age = 32)
    service.createIt(character)
    eventually {
      responseAs[Character] should be(character)
    }
  }
}
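If it helps, here is a hedged sketch of what concrete values for the elided PatienceConfig fields might look like (the Span values are purely illustrative; extending Eventually is what brings the PatienceConfig type and scaled into scope):

import org.scalatest.concurrent.Eventually
import org.scalatest.time.{Millis, Seconds, Span}

trait EventuallyTimeout extends Eventually {
  // every eventually block in suites mixing this in will wait up to 2 seconds,
  // polling every 100 milliseconds
  implicit override val patienceConfig: PatienceConfig =
    PatienceConfig(timeout = scaled(Span(2, Seconds)), interval = scaled(Span(100, Millis)))
}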
For more details, refer to the Eventually docs and to the documentation on implicits.
Finally, on a side note, eventually is mostly intended for integration testing. You might want to consider different mechanisms, such as:
the ScalaFutures trait + whenReady method - similar to the eventually approach (see the sketch after this list).
the Async* spec counterparts (e.g. AsyncFunSpec, AsyncWordSpec, etc.).
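For illustration, a hedged sketch of the ScalaFutures + whenReady alternative (the Character and service names mirror the question but are stubbed here, and the package names assume ScalaTest 3.1+):

import scala.concurrent.Future
import org.scalatest.concurrent.ScalaFutures
import org.scalatest.matchers.should.Matchers
import org.scalatest.wordspec.AnyWordSpec

// stand-ins for the question's domain objects
case class Character(name: String, age: Int)
object service { def createIt(c: Character): Future[Character] = Future.successful(c) }

class AsyncCharacterFutureTest extends AnyWordSpec with Matchers with ScalaFutures {
  "createIt" should {
    "return the character that was just created" in {
      val character = Character(name = "juan", age = 32)
      // whenReady waits for the Future to complete, bounded by the implicit PatienceConfig
      whenReady(service.createIt(character)) { created =>
        created should be(character)
      }
    }
  }
}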

Related

Scala Testing of Trait Applied to Implementing Sub-Classes

Given a scala trait with implementing sub-classes
trait Foo {
  def doesStuff(): Int
}
case class Bar() extends Foo { ... }
case class Baz() extends Foo { ... }
How do the unit tests get organized in order to test the trait and then apply the tests to each of the implementations?
I'm looking for something of the form:
class FooSpec(foo: Foo) extends FlatSpec {
  "A Foo" should {
    "do stuff" in {
      assert(foo.doesStuff == 42)
    }
  }
}
Which would then be applied to each of the implementing classes:
FooSpec(Bar())
FooSpec(Baz())
If the implementations of Bar.doesStuff and Baz.doesStuff have different behavior, having two separate tests is the appropriate solution.
import org.scalatest.FlatSpec

class FooSpec1 extends FlatSpec {
  "a Bar" should "do a bar thing" in {
    assert(Bar().doesStuff() == 42)
  }

  "a Baz" should "do a baz thing" in {
    assert(Baz().doesStuff() % 2 == 0)
  }
}
However, if they have the same behavior, you can refactor the tests with a function to avoid duplicate code. I don't believe scalatest can achieve this reuse pattern at the spec level like you're asking for.
import org.scalatest.FlatSpec

class FooSpec2 extends FlatSpec {
  def checkDoesStuff(foo: Foo): Unit =
    assert(foo.doesStuff() == 42)

  "a Bar" should "do a bar thing" in {
    checkDoesStuff(Bar())
  }

  "a Baz" should "do a baz thing" in {
    checkDoesStuff(Baz())
  }
}
Property-based testing can do exactly what you're looking for, though. Here's an example using ScalaCheck:
import org.scalacheck.{Gen, Properties}
import org.scalacheck.Prop.forAll

object FooProperties extends Properties("Foo") {
  val fooGen: Gen[Foo] = Gen.pick(1, List(Bar(), Baz())).map(_.head)

  property("a Foo always does stuff") = forAll(fooGen) {
    (foo: Foo) => foo.doesStuff() == 42
  }
}
Unlike ScalaTest specs, properties are always functions. The forAll function takes a generator, samples values from it, and runs the test on each sample. Our generator always returns either an instance of Bar or Baz, which means the property covers all the cases you're looking to test. If a single sample fails, the entire property fails.
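As a small aside, a more direct way to build an equivalent generator (a sketch with the same behavior) is Gen.oneOf:

import org.scalacheck.Gen

// picks uniformly between the two concrete implementations
val fooGen: Gen[Foo] = Gen.oneOf(Bar(), Baz())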
Whilst the accepted answer definitely works, ScalaCheck runs 100 test cases per property by default, and is typically used to tackle a different kind of problem than the one in the question.
Perhaps this wasn't available in 2019, but I would like to contribute an answer that, as of ScalaTest 3.2.9 (and perhaps earlier), is more applicable.
You can create a trait to house all the re-usable tests and then mix it into your unit tests. I will give a brief example below using the question's classes. The full example is provided at this link under "Shared tests".
trait FooBehaviours { this: AnyFlatSpec =>
  /*
   * The tests defined here should consider how your trait behaves in certain conditions.
   * Each shared test needs a distinct name per subject, otherwise ScalaTest reports a
   * duplicate test name when both behaviours are used for the same subject.
   */
  def emptyFoo(foo: => Foo): Unit = {
    it should "do stuff when empty" in {
      assert(foo.doesStuff == 42)
    }
  }

  def fullFoo(foo: => Foo): Unit = {
    it should "do stuff when full" in {
      assert(foo.doesStuff == 420)
    }
  }
}
class FooSpec extends AnyFlatSpec with FooBehaviours {
  // these are defs, so a fresh instance is created each time the behaviour functions
  // evaluate their call-by-name (": => Foo") parameters
  def bar = Bar()
  def baz = Baz()

  behavior of "A bar"
  it should behave like emptyFoo(bar)
  it should behave like fullFoo(bar)

  behavior of "A baz"
  it should behave like emptyFoo(baz)
  it should behave like fullFoo(baz)
}
Excerpts from the link, in case it stops working:
Sometimes you may want to run the same test code on different fixture
objects. In other words, you may want to write tests that are "shared"
by different fixture objects. To accomplish this in a AnyFlatSpec, you
first place shared tests in behavior functions. These behavior
functions will be invoked during the construction phase of any
AnyFlatSpec that uses them, so that the tests they contain will be
registered as tests in that AnyFlatSpec. For example, given this stack
class:
...
You may want to test the Stack class in different states:
empty, full, with one item, with one item less than capacity, etc. You
may find you have several tests that make sense any time the stack is
non-empty. Thus you'd ideally want to run those same tests for three
stack fixture objects: a full stack, a stack with one item, and a
stack with one item less than capacity. With shared tests, you can
factor these tests out into a behavior function, into which you pass
the stack fixture to use when running the tests. So in your
AnyFlatSpec for stack, you'd invoke the behavior function three times,
passing in each of the three stack fixtures so that the shared tests
are run for all three fixtures. You can define a behavior function
that encapsulates these shared tests inside the AnyFlatSpec that uses
them. If they are shared between different AnyFlatSpecs, however, you
could also define them in a separate trait that is mixed into each
AnyFlatSpec that uses them.

idiomatic way to block async operations

I was looking at some open source Scala projects, and I saw that some of them do something like this:
abstract class Foo {
  def create(timeout: FiniteDuration)(implicit ex: ExecutionContextExecutor): Seq[ResultSet] = {
    Await.result(createAsync(), timeout)
  }

  def createAsync()(implicit ex: ExecutionContextExecutor): Future[Seq[ResultSet]] = //implementation

  ... more like those
}
Is there any advantage/disadvantage to declaring the (implicit ex: ExecutionContextExecutor) parameter on each method rather than passing the ExecutionContextExecutor in the class constructor:
abstract class Foo(implicit ex: ExecutionContextExecutor) {
  def create(timeout: FiniteDuration): Seq[ResultSet] = {
    Await.result(createAsync(), timeout)
  }

  def createAsync(): Future[Seq[ResultSet]] = //implementation

  ... more like those
}
Is there a preferred option?
The former approach gives you more flexibility as to where the execution of createAsync will be scheduled, since each time you can decide which ExecutionContext to pass in. The question is: do you need that flexibility? I find that most of the time a single ExecutionContext is sufficient, but it is really a case-by-case decision.
In general, the first snippet is horrible IMO. Exposing a synchronous wrapper that blocks around an asynchronous operation is usually a code smell, and code like that doesn't scale well.
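For comparison, a hedged sketch of the non-blocking shape this answer is hinting at (ResultSet is a placeholder type here): keep the public API asynchronous and let callers compose Futures, blocking only at the outermost edge (a test, a main method) if they really must:

import scala.concurrent.{ExecutionContext, Future}

trait ResultSet // placeholder for whatever driver type the real code returns

abstract class Foo {
  def createAsync()(implicit ec: ExecutionContext): Future[Seq[ResultSet]]

  // derived operations stay asynchronous instead of wrapping Await.result
  def createAndCount()(implicit ec: ExecutionContext): Future[Int] =
    createAsync().map(_.size)
}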

Little confused on the usefulness of beforeAll construct in ScalaTest

I have more of a philosophical confusion regarding the usefulness of methods like beforeAll in ScalaTest.
I have been looking for an answer to why we even need constructs like beforeAll. I do understand that there is a reason this design decision was taken, but I am not able to think it through. Can anyone help?
e.g.
The way suggested by tutorials online:
class TestExample extends FunSuite with BeforeAndAfterAll {
  private var _tempDir: File = _
  protected def tempDir: File = _tempDir

  override def beforeAll(): Unit = {
    super.beforeAll()
    _tempDir = Utils.createTempDir(namePrefix = this.getClass.getName)
  }

  test("...") {
    // using the variable in the function
  }
}
vs
class TestExample extends FunSuite with BeforeAndAfterAll {
  private val tempDir: File =
    Utils.createTempDir(namePrefix = this.getClass.getName)

  test("...") {
    // Use the initialized variable here.
  }
}
If you have cleanup to do in afterAll, I think it is symmetric to do setup in beforeAll. Also, if you need to perform some side effect that doesn't involve initializing instance variables, that can go in beforeAll. In the example you gave, though, where you don't have any cleanup to do in afterAll and all you're doing before the tests is initializing instance variables, I'd go with plain old initialization.
One other difference between val initializers and beforeAll is that val initializers happen when the class is instantiated, whereas beforeAll happens later, when the instance is executed. If you want to delay the initialization until the class is run, you can use lazy vals.
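A hedged sketch of that last point, reusing the question's Utils.createTempDir helper (illustrative only):

import java.io.File
import org.scalatest.FunSuite

class TestExample extends FunSuite {
  // the lazy val defers the side effect until the first test that touches it
  private lazy val tempDir: File = Utils.createTempDir(namePrefix = this.getClass.getName)

  test("...") {
    // tempDir is created here, on first access, not when the class is instantiated
    assert(tempDir.exists())
  }
}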
One point worth noting is that some runners (such as the ScalaTest ant task and IntelliJ IDEA) will instantiate all test instances before running any tests. If your setup code happens to interact with any global variables or external state, then you probably want to defer those interactions until the test is run.
As a simple (contrived) example, suppose your code under test includes
object Singleton {
  var foo = ""
}
and you have two test classes:
import org.scalatest.{FunSuite, Matchers}

class Test1 extends FunSuite with Matchers {
  Singleton.foo = "test1"

  test("...") {
    Singleton.foo should be("test1")
  }
}

class Test2 extends FunSuite with Matchers {
  Singleton.foo = "test2"

  test("...") {
    Singleton.foo should be("test2")
  }
}
If both classes are instantiated before any tests are run, then at least one of your two tests will fail. Conversely, if you defer your initialization work until beforeAll, you won't see the same interference between tests.
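A hedged sketch of the deferred version of the same test, so the assignment only happens when this suite actually runs:

import org.scalatest.{BeforeAndAfterAll, FunSuite, Matchers}

class Test1 extends FunSuite with BeforeAndAfterAll with Matchers {
  override def beforeAll(): Unit = {
    super.beforeAll()
    Singleton.foo = "test1" // runs when the suite is executed, not when it is instantiated
  }

  test("...") {
    Singleton.foo should be("test1")
  }
}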

Can you dynamically generate Test names for ScalaTest from input data?

I have a number of test data sets that run through the same ScalaTest unit tests. I'd love it if each test data set were its own set of named tests, so that if a data set fails one of the tests I know exactly which one it was, rather than going to a single test and working out which data file it failed on. I just can't seem to find a way for the test name to be generated at runtime. I've looked at property and table based testing, and I am currently using should behave like to share fixtures, but none of these seem to do what I want.
Have I not uncovered the right testing approach in ScalaTest or is this not possible?
You can write dynamic test cases with ScalaTest, as Jonathan Chow describes in his blog post: http://blog.echo.sh/2013/05/12/dynamically-creating-tests-with-scalatest.html
However, I always prefer the WordSpec test definitions, and these work with dynamic test cases too, just as Jonathan describes.
class MyTest extends WordSpec with Matchers {
  "My test" should {
    Seq(1, 2, 3) foreach { count =>
      s"run test $count" in {
        count should be(count)
      }
    }
  }
}
When running this test, it runs 3 test cases:
TestResults
  MyTest
    My test
      run test 1
      run test 2
      run test 3
P.S. You can even define multiple test cases inside the same foreach function using the same count variable.
You could write a base test class, and extend it for each data set. Something like this:
case class Person(name: String, age: Int)

abstract class MyTestBase extends WordSpec with Matchers {
  def name: String
  def dataSet: List[Person]

  s"Data set $name" should {
    "have no zero-length names" in {
      dataSet.foreach { s => s.name should not be empty }
    }
  }
}

class TheTest extends MyTestBase {
  override lazy val name = "Family" // note lazy, otherwise initialization fails
  override val dataSet = List(Person("Mom", 53), Person("Dad", 50))
}
Which produces output like this:
TheTest:
Data set Family
- should have no zero-length names
You can use Scala string interpolation in your test names. Using behavior functions, something like this would work:
case class Person(name: String, age: Int)

trait PersonBehaviors { this: FlatSpec =>
  // or add the data set name as a parameter to this function
  def personBehavior(person: => Person): Unit = {
    behavior of person.name

    it should s"have non-negative age: ${person.age}" in {
      assert(person.age >= 0)
    }
  }
}

class TheTest extends FlatSpec with PersonBehaviors {
  val person = Person("John", 32)
  personBehavior(person)
}
This produces output like this:
TheTest:
John
- should have non-negative age: 32
What about using ScalaTest's clue mechanism so that any test failures can report as a clue which data set was being used?
You can use the withClue construct provided by Assertions,
which is extended by every style trait in ScalaTest, to add
extra information to reports of failed or canceled tests.
See also the documentation on AppendedClues
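A hedged sketch of that idea (the data set names and the ScalaTest 3.x package names are illustrative):

import org.scalatest.matchers.should.Matchers
import org.scalatest.wordspec.AnyWordSpec

class DataSetSpec extends AnyWordSpec with Matchers {
  val dataSets = Map("Family" -> List(1, 2, 3), "Friends" -> List(4, 5, 6))

  "every data set" should {
    "contain only positive values" in {
      dataSets.foreach { case (name, data) =>
        withClue(s"data set '$name': ") { // the clue is prepended to any failure message
          all(data) should be > 0
        }
      }
    }
  }
}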

How to do setup/teardown in specs2 when using "in new WithApplication"

I am using Specs2 with Play 2.2.1 built with Scala 2.10.2 (running Java 1.7.0_51). I have been reading about how to do setup/teardown with Specs2, and I have seen examples using the "After" trait as follows:
class Specs2Play extends org.specs2.mutable.Specification {
  "this is the first example" in new SetupAndTeardownPasswordAccount {
    println("testing")
  }
}

trait SetupAndTeardownPasswordAccount extends org.specs2.mutable.After {
  println("setup")
  def after = println("teardown")
}
This works fine, except that all of my tests use "in new WithApplication". It seems what I need is an object which is both a "WithApplication" and an "After". The code below does not compile, but is essentially what I want:
trait SetupAndTeardownPasswordAccount extends org.specs2.mutable.After with WithApplication
So, my question is: how do I add setup/teardown to my tests, which are already using "in new WithApplication"? My primary concern is that all of our tests make use of fake routing like this (so they need the WithApplication):
val aFakeRequest = FakeRequest(method, url).withHeaders(headers).withBody(jsonBody)
val Some(result) = play.api.test.Helpers.route(aFakeRequest)
result
This is the code for WithApplication:
abstract class WithApplication(val app: FakeApplication = FakeApplication()) extends Around with Scope {
  implicit def implicitApp = app

  override def around[T: AsResult](t: => T): Result = {
    Helpers.running(app)(AsResult.effectively(t))
  }
}
It's actually quite easy to modify this to suit your needs without creating a bunch of other traits. The missing piece here is the anonymous function t, which you provide the implementation for in your tests (using WithApplication). It would be nice to make WithApplication a little more robust to be able to execute arbitrary blocks of code before and after the tests, if necessary.
One approach could be to create a similar class to WithApplication that accepts two anonymous functions setup and teardown that both return Unit. All I really need to do is modify what's happening inside AsResult.effectively(t). To keep this simple, I'm going to remove the app parameter from the parameter list, and use FakeApplication always. You don't seem to be providing a different configuration, and it can always be added back.
abstract class WithEnv(setup: => Unit, teardown: => Unit) extends Around with Scope {
  val app: FakeApplication = FakeApplication()
  implicit def implicitApp = app

  override def around[T: AsResult](t: => T): Result = {
    Helpers.running(app)(AsResult.effectively {
      setup
      try {
        t
      } finally {
        teardown
      }
    })
  }
}
Instead of simply calling the anonymous function t, I first call setup, then t, then teardown. The try/finally block is important because failed tests in specs2 throw exceptions, and we want to be sure that teardown will be executed no matter what the outcome.
Now you can easily setup test environments using functions.
import java.nio.file.{Files, Paths}

def createFolder: Unit = Files.createDirectories(Paths.get("temp/test"))
def deleteFolder: Unit = Files.delete(Paths.get("temp/test"))

"check if a file exists" in new WithEnv(createFolder, deleteFolder) {
  Files.exists(Paths.get("temp/test")) must beTrue
}
(This might not compile, but you get the idea.)
If your after method doesn't need anything from the WithApplication trait, you can mix the AfterExample trait into your specification and define the after behaviour for the whole spec:
import org.specs2.specification._

class Specs2Play extends org.specs2.mutable.Specification with AfterExample {
  "this is the first example" in new SetupAndTeardownPasswordAccount {
    pending("testing")
  }

  trait SetupAndTeardownPasswordAccount extends WithApplication

  def after = println("cleanup")
}