Parallel execution of tests - Scala

I've noticed that SBT is running my specs2 tests in parallel. This seems good, except that one of my tests involves reading from and writing to a file, and hence fails unpredictably; see below.
Are there any better options than
setting all tests to run serially, or
using separate file names and tear-downs for each test?
class WriteAndReadSpec extends Specification {
  val file = new File("testFiles/tmp.txt")

  "WriteAndRead" should {
    "work once" in {
      new FileWriter(file, false).append("Foo").close
      Source.fromFile(file).getLines().toList(0) must_== "Foo"
    }
    "work twice" in {
      new FileWriter(file, false).append("Bar").close
      Source.fromFile(file).getLines().toList(0) must_== "Bar"
    }
  }

  trait TearDown extends After {
    def after = if (file.exists) file.delete
  }
}

In addition to what is written about sbt above, you must know that specs2 runs all the examples of your specifications concurrently by default.
You can still declare that, for a given specification, the examples must be executed sequentially. To do that, you simply add sequential to the beginning of your specification:
class WriteAndReadSpec extends Specification {
  val file = new File("testFiles/tmp.txt")
  sequential

  "WriteAndRead" should {
    ...
  }
}

A fixed sequence of tests in a suite can lead to interdependent test cases and a maintenance burden.
I would prefer to test without touching the file system at all (whether it is business logic or serialization code), or, if that is unavoidable (as when testing integration with file feeds), to create temporary files:
// Create a temp file.
val temp = File.createTempFile("pattern", ".suffix")

// Delete the temp file when the JVM exits.
temp.deleteOnExit()

The wiki link Pablo Fernandez gave in his answer is pretty good, though there was a minor error in the example that might throw one off (being a wiki, I could and did correct it). Here's a project/Build.scala that actually compiles and produces the expected filters, though I didn't actually try it out with tests.
import sbt._
import Keys._

object B extends Build {
  lazy val root =
    Project("root", file("."))
      .configs(Serial)
      .settings(inConfig(Serial)(Defaults.testTasks): _*)
      .settings(
        libraryDependencies ++= specs,
        testOptions in Test := Seq(Tests.Filter(parFilter)),
        testOptions in Serial := Seq(Tests.Filter(serialFilter))
      )
      .settings(parallelExecution in Serial := false)

  def parFilter(name: String): Boolean = !(name startsWith "WriteAndReadSpec")
  def serialFilter(name: String): Boolean = name startsWith "WriteAndReadSpec"

  lazy val Serial = config("serial") extend(Test)

  lazy val specs = Seq(
    "org.specs2" %% "specs2" % "1.6.1",
    "org.specs2" %% "specs2-scalaz-core" % "6.0.1" % "test"
  )
}

There seems to be a third option, which is grouping the serial tests in a configuration and running them separately while running the rest in parallel.
Check this wiki and look for "Application to parallel execution".
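With a configuration like the Serial one defined in the Build.scala above, the two groups can then be run separately from the sbt shell (a usage sketch; adjust the names to your build):
> test
> serial:test
The first command runs everything except WriteAndReadSpec in parallel; the second runs only WriteAndReadSpec, with parallelExecution disabled.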

Other answers explained how to make them run sequentially.
While they're valid answers, in my opinion it's better to change your tests so that they can run in parallel (if possible).
In your example, use different files for each test (a sketch follows below).
If you have a DB involved, use different (or random) users (or whatever isolation you can get),
etc.
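For the file-based example from the original question, that could mean giving each example its own temporary file instead of sharing testFiles/tmp.txt. A minimal sketch (not from the answers above; the withTempFile helper is illustrative):

import java.io.{File, FileWriter}
import scala.io.Source
import org.specs2.mutable.Specification

class WriteAndReadParallelSpec extends Specification {

  // Each example gets its own temp file, so examples cannot clobber
  // each other when they run in parallel.
  private def withTempFile[T](body: File => T): T = {
    val file = File.createTempFile("WriteAndReadSpec", ".txt")
    try body(file) finally file.delete()
  }

  "WriteAndRead" should {
    "work once" in withTempFile { file =>
      new FileWriter(file, false).append("Foo").close()
      Source.fromFile(file).getLines().toList(0) must_== "Foo"
    }
    "work twice" in withTempFile { file =>
      new FileWriter(file, false).append("Bar").close()
      Source.fromFile(file).getLines().toList(0) must_== "Bar"
    }
  }
}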

Related

How to override libraryDependencies in an sbt plugin?

How would one override the libraryDependencies?
I tried:
Keys.libraryDependencies in Compile := {
  val libraryDependencies = (Keys.libraryDependencies in Compile).value
  val allLibraries = UpdateDependencies(libraryDependencies)
  allLibraries
}
That seems to work: when I add a print statement, allLibraries is correct.
However, in the next step it doesn't seem to have the right values:
Keys.update in Compile := Def.taskDyn {
  val u = (Keys.update in Compile).value
  val log = streams.value.log
  Def.task {
    val allModules = u.configurations.flatMap(_.allModules)
    log.info(s"Read ${allModules.size} modules:")
    u
  }
}.value
The print statement only shows a few modules instead of all the ones I would have added in the previous step.
Does anyone have a solution? Thanks!
So I understand now where my problem was.
I was not understanding correctly how settings and tasks work together.
Settings are only evaluated once, when sbt starts.
Tasks are only evaluated when sbt runs a task or command that requires them.
So you cannot read and then rewrite settings like that.
It was so convoluted that I even wrote a whole article about it.
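A small sketch of that distinction (hypothetical keys, just to show when each part is evaluated):

// build.sbt
lazy val greeting = settingKey[String]("a greeting")
lazy val greet = taskKey[Unit]("print the greeting")

// the setting body runs once, while the project is loading
greeting := { println("computing greeting"); "hello" }

// the task body runs every time `greet` is invoked
greet := { println(greeting.value) }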

How to do Slick configuration via application.conf from within custom sbt task?

I want to create an sbt task which creates a database schema with Slick. For that, I have a task object like the following in my project:
object CreateSchema {
  val instance = Database.forConfig("localDb")

  def main(args: Array[String]) {
    val createFuture = instance.run(createActions)
    ...
    Await.ready(createFuture, Duration.Inf)
  }
}
and in my build.sbt I define a task:
lazy val createSchema = taskKey[Unit]("CREATE database schema")
fullRunTask(createSchema, Runtime, "sbt.CreateSchema")
which gets executed as expected when I run sbt createSchema from the command line.
However, the problem is that application.conf doesn't seem to get taken into account (I've also tried different scopes like Compile or Test). As a result, the task fails due to com.typesafe.config.ConfigException$Missing: No configuration setting found for key 'localDb'.
How can I fix this so the configuration is available?
I found a lot of questions here that deal with using the application.conf inside the build.sbt itself, but that is not what I need.
I have set up a little demo using SBT 0.13.8 and Slick 3.0.0, which is working as expected (and even without modifying "-Dconfig.resource").
Files
./build.sbt
name := "SO_20150915"
version := "1.0"
scalaVersion := "2.11.7"
libraryDependencies ++= Seq(
  "com.typesafe" % "config" % "1.3.0" withSources() withJavadoc(),
  "com.typesafe.slick" %% "slick" % "3.0.0",
  "org.slf4j" % "slf4j-nop" % "1.6.4",
  "com.h2database" % "h2" % "1.3.175"
)
lazy val createSchema = taskKey[Unit]("CREATE database schema")
fullRunTask(createSchema, Runtime, "somefun.CallMe")
./project/build.properties
sbt.version = 0.13.8
./src/main/resources/reference.conf
hello {
  world = "buuh."
}

h2mem1 = {
  url = "jdbc:h2:mem:test1"
  driver = org.h2.Driver
  connectionPool = disabled
  keepAliveConnection = true
}
./src/main/scala/somefun/CallMe.scala
package somefun

import com.typesafe.config.Config
import com.typesafe.config.ConfigFactory
import slick.driver.H2Driver.api._

/**
 * SO_20150915
 * Created by martin on 15.09.15.
 */
object CallMe {
  def main(args: Array[String]): Unit = {
    println("Hello")
    val settings = new Settings()
    println(s"Settings read from hello.world: ${settings.hw}")

    val db = Database.forConfig("h2mem1")
    try {
      // ...
      println("Do something with your database.")
    } finally db.close
  }
}
class Settings(val config: Config) {
  // This verifies that the Config is sane and has our
  // reference config. Importantly, we specify the "hello"
  // path so we only validate settings that belong to this
  // library. Otherwise, we might throw mistaken errors about
  // settings we know nothing about.
  config.checkValid(ConfigFactory.defaultReference(), "hello")

  // This uses the standard default Config, if none is provided,
  // which simplifies apps willing to use the defaults.
  def this() {
    this(ConfigFactory.load())
  }

  val hw = config.getString("hello.world")
}
Result
If I run sbt createSchema from the console, I obtain the output:
[info] Loading project definition from /home/.../SO_20150915/project
[info] Set current project to SO_20150915 (in build file:/home/.../SO_20150915/)
[info] Running somefun.CallMe
Hello
Settings read from hello.world: buuh.
Do something with your database.
[success] Total time: 1 s, completed 15.09.2015 10:42:20
Ideas
Please verify that this unmodified demo project also works for you.
Then try changing the SBT version in the demo project and see if that changes anything.
Alternatively, recheck your project setup and try to use a higher version of SBT.
Answer
So, even if your code resides in your src folder, it is called from within SBT. That means you are trying to load your application.conf from within the classpath context of SBT.
Slick uses Typesafe Config internally. (So the approach described in the background section below is not applicable, as you cannot modify the Config loading mechanism itself.)
Instead, try to set the path to your application.conf explicitly via config.resource; see the Typesafe Config documentation (search for config.resource).
Option 1
Either set config.resource (via -Dconfig.resource=...) before starting sbt
Option 2
Or from within build.sbt as Scala code
sys.props("config.resource") = "./src/main/resources/application.conf"
Option 3
Or create a Task in SBT via
lazy val configPath = TaskKey[Unit]("configPath", "Set path for application.conf")
and add
configPath := {
  sys.props("config.resource") = "./src/main/resources/application.conf"
}
to your sequence of settings.
Please let me know if that worked.
Background information
Recently, I was writing a custom plugin for SBT, where I also tried to access a reference.conf. Unfortunately, I was not able to access any .conf file placed within the project subfolder using the default ClassLoader.
In the end I created a testenvironment.conf in the project folder and used the following code to load the (Typesafe) config:
def getConfig: Config = {
  val classLoader = new java.net.URLClassLoader(Array(new File("./project/").toURI.toURL))
  ConfigFactory.load(classLoader, "testenvironment")
}
or, for loading a general application.conf from ./src/main/resources:
def getConfig: Config = {
  val classLoader = new java.net.URLClassLoader(Array(new File("./src/main/resources/").toURI.toURL))
  // no .conf basename given, so look for reference.conf and application.conf
  // using the specific classLoader
  ConfigFactory.load(classLoader)
}

SBT: how to modify it:test with a task that calls the original test task?

In the snippet of my Build.scala file below, the itTestWithService task starts a test server before running the integration tests and stops it after they finish.
I'd like to attach this itTestWithService task to the it:test key. But how?
lazy val mohs =
  Project(id = "mohs", base = file("."))
    .settings(
      // I'd like the following but it creates a cycle that fails at runtime:
      // test in IntegrationTest <<= testWithService
      itTestWithService <<= testWithService
    )

val itTestWithService = taskKey[Unit]("run integration test with background server")

/** Run integration tests against a test server. (The server is started before the tests and stopped after the tests.) */
lazy val testWithService = Def.task {
  val log = streams.value.log
  val launched = (start in testService).value
  launched match {
    case Success(_) =>
      testAndStop.value
    case Failure(e) =>
      val stack = e.getStackTrace().mkString("\n")
      log.error(s"failed to start test server: $e \n ${stack}")
  }
}

/** Run integration tests and then stop the test server. */
lazy val testAndStop = Def.taskDyn {
  val _ = (test in IntegrationTest).value
  stop in testService
}
In a related GitHub issue discussion, Josh suggested an approach to the particular case I asked about (overriding it:test with a task that calls the original test task).
The approach works by reimplementing the test task. I don't know if there's a more general way to get access to the original version of the task. (A more general way would be a better answer than this one!)
Here's how to reimplement the it:test task:
/** Run integration tests (just like it:test does), but explicitly, so we can overwrite the it:test key. */
lazy val itTestTask: Initialize[Task[Unit]] = Def.taskDyn {
  for {
    results <- (executeTests in IntegrationTest)
  } yield {
    Tests.showResults(streams.value.log, results, "test missing?")
  }
}
Here's the composite integration test task (evolved slightly from the original question, though that version should work too):
/** Run integration tests against a test server. (The server is started before the tests and stopped after the tests.) */
lazy val testWithServiceTask = Def.taskDyn {
  (start in testService).value match {
    case Success(_) =>
      testAndStop
    case Failure(e) =>
      val stack = e.getStackTrace().mkString("\n")
      streams.value.log.error(s"failed to start test server: $e \n ${stack}")
      emptyTask
  }
}

/** Run integration tests and then stop the test server. */
lazy val testAndStop = Def.taskDyn {
  SbtUtil.sequence(itTestTask, stop in testService)
}

val emptyTask = Def.task {}
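SbtUtil.sequence is not shown in the answer; a minimal sketch of what such a sequencing helper could look like (a guess, assuming the usual import sbt._ in a Build.scala), running the first task to completion and then the second:

object SbtUtil {
  def sequence[A, B](first: Def.Initialize[Task[A]],
                     second: Def.Initialize[Task[B]]): Def.Initialize[Task[B]] =
    Def.taskDyn {
      val _ = first.value  // the first task runs as a dependency of the dynamic computation
      second               // then the second task is returned and run
    }
}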
And now plugging the composite task we've built into the it:test key doesn't create a cycle:
lazy val mohs =
  Project(id = "mohs", base = file("."))
    .settings(
      test in IntegrationTest <<= testWithServiceTask
    )
You can add custom test tags in build.scala. Here's the sample code from one of my projects. Keep in mind, you don't have to bind it to it:test. You could name it anything you want.
lazy val AcceptanceTest = config("acc") extend(Test)

lazy val Kernel = Project(
  id = "kernel",
  base = file("."),
  settings = defaultSettings ++ AkkaKernelPlugin.distSettings ++ Seq(
    libraryDependencies ++= Dependencies.Kernel,
    distJvmOptions in Dist := "-Xms256M -Xmx2048M",
    outputDirectory in Dist := file("target/dist"),
    distMainClass in Dist := "akka.kernel.Main system.SystemKernel"
  )
).configs(AcceptanceTest)
  .settings(inConfig(AcceptanceTest)(Defaults.testTasks): _*)
  .settings(testOptions in AcceptanceTest := Seq(
    Tests.Argument("-n", "AcceptanceTest"),
    Tests.Argument("-oD")
  ))
Just pay attention to the lazy val at the top and the .configs part.
With that setup, when I type acc:test, it runs all tests with the AcceptanceTest tag. And you can just boot the server as part of what gets called by your test suite. You can even tag tests by whether they need the server or not, so you can separate them once your suite gets big and takes longer to run.
Edit: Added to respond to comments.
To tag tests, create a tag like this:
import org.scalatest.Tag
object AcceptanceTest extends Tag("AcceptanceTest")
then put it here in your tests...
it("should allow any actor to subscribe to any channel", AcceptanceTest) {
And that corresponds to the same build setup above. Only tests with that tag will be run when I call acc:test.
What came to mind for your problem is the solution I use for the same situation. Right now, you're doing the work in build.scala. I'm not sure whether it's possible to do what you're describing right there... but what I do achieves the same thing a bit differently. I have a trait that I mix in to all tests that need a Vagrant server, and I tag the tests that use it as VagrantTest.
It works like a singleton: if one or more tests need the server, it boots, but only one instance boots and all those tests share it.
You could try doing the same, but override it:test in the configuration. Instead of "acc" in the example above, put "it". If that's not quite what you're looking for, maybe someone else will pop in.
So, basically, when I call it:test it ends up running all tests with the IntegrationTest tag on them (as well as those with VagrantTest and a few others). That way the longer-running server tests don't get run as much (they take too long).
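The mix-in trait idea might look roughly like this (a hypothetical sketch, not the original code; SharedTestServer, RequiresTestServer and startServer are illustrative names):

import org.scalatest.{BeforeAndAfterAll, Suite}

object SharedTestServer {
  // started at most once per JVM, on first access
  lazy val instance: Unit = startServer()
  private def startServer(): Unit = println("booting test server ...")
}

trait RequiresTestServer extends BeforeAndAfterAll { this: Suite =>
  override def beforeAll(): Unit = {
    SharedTestServer.instance  // forces the lazy startup, shared by every suite that mixes this in
    super.beforeAll()
  }
}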

How to measure and display the running time of a single test?

I have a potentially long-running test written with scalatest:
test("a long running test") {
failAfter(Span(60, Seconds)) {
// ...
}
}
Even if the test finishes within the timeout limit, its running time can be valuable to the person who runs the test. How can I measure and display the running time of this single particular test in scalatest's output?
Update: Currently I measure the running time with my own function, just as in r.v's answer. I'd like to know if scalatest offers this functionality already.
The -oD option will give the duration of each test. For example, I use the following in my build.sbt:
testOptions in Test += Tests.Argument("-oD")
EDIT:
You can also use the following for individual runs:
> test-only org.acme.RedSuite -- -oD
See http://www.scalatest.org/user_guide/using_scalatest_with_sbt.
Moreover, you can define the following function for general time measurements:
def time[T](str: String)(thunk: => T): T = {
  print(str + "... ")
  val t1 = System.currentTimeMillis
  val x = thunk
  val t2 = System.currentTimeMillis
  println((t2 - t1) + " msecs")
  x
}
and use it anywhere (it does not depend on ScalaTest):
test("a long running test") {
time("test running"){
failAfter(Span(60, Seconds)) {
// ...
}
}
In addition to r.v.'s answer:
If you have a multi-project build, testOptions in Test += Tests.Argument("-oD") does not work at the root level in build.sbt, because Test there refers to the root project's src/test/scala. You have to put it inside your sub-project's settings:
Project(id = "da_project", base = file("da_project"))
.settings(
testOptions in Test += Tests.Argument("-oDG"),
libraryDependencies ++= Seq(
...
)
)

Make ScalaCheck tests deterministic

I would like to make my ScalaCheck property tests in my specs2 test suite deterministic, temporarily, to ease debugging. Right now, different values could be generated each time I re-run the test suite, which makes debugging frustrating, because you don't know if a change in observed behaviour is caused by your code changes, or just by different data being generated.
How can I do this? Is there an official way to set the random seed used by ScalaCheck?
I'm using sbt to run the test suite.
Bonus question: Is there an official way to print out the random seed used by ScalaCheck, so that you can reproduce even a non-deterministic test run?
If you're using pure ScalaCheck properties, you should be able to use the Test.Parameters class to change the java.util.Random instance which is used, and provide your own which always returns the same set of values:
def check(params: Test.Parameters, p: Prop): Test.Result
[updated]
I just published a new specs2-1.12.2-SNAPSHOT where you can use the following syntax to specify your random generator:
case class MyRandomGenerator() extends java.util.Random {
  // implement a deterministic generator
}

"this is a specific property" ! prop { (a: Int, b: Int) =>
  (a + b) must_== (b + a)
}.set(MyRandomGenerator(), minTestsOk -> 200, workers -> 3)
As a general rule, when testing on non-deterministic inputs you should try to echo or save those inputs somewhere when there's a failure.
If the data is small, you can include it in the label or error message that gets shown to the user; for example, in an xUnit-style test (pardon the Java-ish syntax, I'm new to Scala):
testLength(String x) {
  assert(x.length > 10, "Length OK for '" + x + "'");
}
If the data is large, for example an auto-generated DB, you might either store it in a non-volatile location (eg. /tmp with a timestamped name) or show the seed used to generate it.
The next step is important: take that value, or seed, or whatever, and add it to your deterministic regression tests, so that it gets checked every time from now on.
You say you want to make ScalaCheck deterministic "temporarily" to reproduce this issue; I say you've found a buggy edge-case which is well-suited to becoming a unit test (perhaps after some manual simplification).
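In ScalaTest terms, the echo-the-input idea for small data could look like this (a sketch; generateInput is a hypothetical stand-in for however the non-deterministic input is produced):

import org.scalatest.funsuite.AnyFunSuite  // ScalaTest 3.1+ import path

class LengthTest extends AnyFunSuite {
  private def generateInput(): String = scala.util.Random.alphanumeric.take(15).mkString

  test("length is OK for generated input") {
    val x = generateInput()
    withClue(s"input was '$x': ") {  // the clue is prepended to any failure message
      assert(x.length > 10)
    }
  }
}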
Bonus question: Is there an official way to print out the random seed used by ScalaCheck, so that you can reproduce even a non-deterministic test run?
From specs2-scalacheck version 4.6.0, this is the default behaviour:
Given the test file HelloSpec:
package example

import org.specs2.mutable.Specification
import org.specs2.ScalaCheck

class HelloSpec extends Specification with ScalaCheck {
  s2"""
  a simple property $ex1
  """

  def ex1 = prop((s: String) => s.reverse.reverse must_== "")
}
build.sbt config:
import Dependencies._
ThisBuild / scalaVersion := "2.13.0"
ThisBuild / version := "0.1.0-SNAPSHOT"
ThisBuild / organization := "com.example"
ThisBuild / organizationName := "example"
lazy val root = (project in file("."))
  .settings(
    name := "specs2-scalacheck",
    libraryDependencies ++= Seq(
      specs2Core,
      specs2MatcherExtra,
      specs2Scalacheck
    ).map(_ % "test")
  )
project/Dependencies.scala:
import sbt._
object Dependencies {
  lazy val specs2Core = "org.specs2" %% "specs2-core" % "4.6.0"
  lazy val specs2MatcherExtra = "org.specs2" %% "specs2-matcher-extra" % specs2Core.revision
  lazy val specs2Scalacheck = "org.specs2" %% "specs2-scalacheck" % specs2Core.revision
}
When you run the test from the sbt console:
sbt:specs2-scalacheck> testOnly example.HelloSpec
You get the following output:
[info] HelloSpec
[error] x a simple property
[error] Falsified after 2 passed tests.
[error] > ARG_0: "\u0000"
[error] > ARG_0_ORIGINAL: "猹"
[error] The seed is X5CS2sVlnffezQs-bN84NFokhAfmWS4kAg8_gJ6VFIP=
[error]
[error] > '' != '' (HelloSpec.scala:11)
[info] Total for specification HelloSpec
To reproduce that specific run (i.e. with the same seed), you can take the seed from the output and pass it using the command-line option scalacheck.seed:
sbt:specs2-scalacheck>testOnly example.HelloSpec -- scalacheck.seed X5CS2sVlnffezQs-bN84NFokhAfmWS4kAg8_gJ6VFIP=
And this produces the same output as before.
You can also set the seed programmatically using setSeed:
def ex1 = prop((s: String) => s.reverse.reverse must_== "").setSeed("X5CS2sVlnffezQs-bN84NFokhAfmWS4kAg8_gJ6VFIP=")
Yet another way to provide the seed is to pass an implicit Parameters in which the seed is set:
package example

import org.specs2.mutable.Specification
import org.specs2.ScalaCheck
import org.scalacheck.rng.Seed
import org.specs2.scalacheck.Parameters

class HelloSpec extends Specification with ScalaCheck {
  s2"""
  a simple property $ex1
  """

  implicit val params = Parameters(minTestsOk = 1000, seed = Seed.fromBase64("X5CS2sVlnffezQs-bN84NFokhAfmWS4kAg8_gJ6VFIP=").toOption)

  def ex1 = prop((s: String) => s.reverse.reverse must_== "")
}
Here is the documentation about all those various ways.
This blog also talks about this.
For scalacheck-1.12 this configuration worked:
new Test.Parameters {
  override val rng = new scala.util.Random(seed)
}
For scalacheck-1.13 it doesn't work anymore since the rng method is removed. Any thoughts?
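For what it's worth, newer ScalaCheck versions (1.14+, if I recall correctly) expose the seed directly on Test.Parameters instead of the removed rng; a hedged sketch (check your version's Test.Parameters API before relying on it):

import org.scalacheck.{Prop, Test}
import org.scalacheck.rng.Seed

val myProp: Prop = Prop.forAll { (a: Int, b: Int) => a + b == b + a }

// withInitialSeed fixes the starting seed so every run generates the same inputs
val params = Test.Parameters.default
  .withInitialSeed(Seed.fromBase64("X5CS2sVlnffezQs-bN84NFokhAfmWS4kAg8_gJ6VFIP=").get)

Test.check(params, myProp)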