How to make SBT run test suites in parallel? - scala

I have a bunch of integration tests run by sbt: N test suites with 1..M tests each.
I have set fork in IntegrationTest := true, but the test suites are always executed sequentially. According to my reading of the docs, this should not be the case: test suites should be executed concurrently.
Each test suite is a class like the following:
class MyTestSuite1 extends FlatSpec with Matchers {
  ...
  it should "do A" in {}
  it should "do B" in {}
}
class MyTestSuite2 extends FlatSpec with Matchers {
  ...
  it should "do C" in {}
  it should "do D" in {}
}
the problem
MyTestSuite1 and MyTestSuiteN are executed sequentially (in alphabetical order, to be exact)
expectation
MyTestSuite1 and MyTestSuiteN are executed concurrently
env
.sbtopts:
-J-Xms1G
-J-Xmx4G
-J-XX:MaxMetaspaceSize=512m
-J-Xss4M
note
I noticed that all tests run on the same pool and thread, for example pool-1-thread-1 for every test.
sbt version: 1.2.8
Scala: 2.12.8
os: MacOS 10.15, Ubuntu 19.04
Scalatest ver: 3.2.0-SNAP10
Tried sbt v. 1.3.2 - same result.
Adding
testOptions in IntegrationTest += Tests.Argument(TestFrameworks.ScalaTest, "-P4"),
does not help.
============
Update
fork in (IntegrationTest, test) := true works at the global level, but I have 2 projects and I want to set it per project so that each forked JVM keeps its working directory relative to its project.
e.g.
lazy val `p1` = Project(id = "p1", base = file("./p1"))
  .configs(IntegrationTest)
  .settings(Defaults.itSettings: _*)
  .settings(
    fork in (IntegrationTest, test) := true,
    ...)

lazy val `p2` = Project(id = "p2", base = file("./p2"))
  .configs(IntegrationTest)
  .settings(Defaults.itSettings: _*)
  .settings(
    fork in (IntegrationTest, test) := true,
    ...)
does not run tests in parallel
instead, this runs in parallel, but, obviously, the working directory is set to "." rather than to "./p1" or "./p2" respectively:
fork in (IntegrationTest, test) := true

lazy val `p1` = Project(id = "p1", base = file("./p1"))
  .configs(IntegrationTest)
  .settings(Defaults.itSettings: _*)

SOLUTION:
It appears there's a testForkedParallel in IntegrationTest := true option, which does exactly what I needed: it spawns a new JVM per test suite.
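For reference, a minimal sketch of the per-project settings (assuming the same two-project layout as above; testForkedParallel is a stock sbt key):

lazy val `p1` = Project(id = "p1", base = file("./p1"))
  .configs(IntegrationTest)
  .settings(Defaults.itSettings: _*)
  .settings(
    // fork the integration tests...
    fork in IntegrationTest := true,
    // ...and run suites in parallel, one forked JVM per suite
    testForkedParallel in IntegrationTest := true
  )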
==============
Remark:
So, the only problem is that it now spawns as many JVMs as there are available CPUs, and I can't restrict test concurrency alone:
OPTION 1 - limits all sbt tasks to only 4 in parallel
concurrentRestrictions in Global := Seq(Tags.limitAll(4))
OPTION 2 - just does nothing (the tests are in a subproject)
concurrentRestrictions in Global += Tags.limit(Tags.Test, 4),
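A possible refinement (an untested sketch on my part): sbt tags each forked test group with the built-in Tags.ForkedTestGroup tag, so limiting that tag should cap only the number of simultaneously forked suite JVMs:

// limit concurrently forked test-suite JVMs to 4 without
// throttling other sbt tasks (sketch, not verified)
concurrentRestrictions in Global += Tags.limit(Tags.ForkedTestGroup, 4)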

By default, tests executed in a forked JVM are executed sequentially. Refer to the following paragraph from the sbt testing docs:
The setting:
Test / fork := true
specifies that all tests will be executed in a single external JVM. See Forking for configuring standard options for forking. By default, tests executed in a forked JVM are executed sequentially. More control over how tests are assigned to JVMs and what options to pass to those is available with testGrouping key.
So, you have two options:
Do not fork the JVM, and your tests will run in parallel by default.
If you want to fork and still run tests in parallel, go through these docs:
https://www.scala-sbt.org/1.x/docs/Testing.html#Forking+tests
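From those docs, the testGrouping approach could look roughly like this (a sketch, assuming one group, and hence one forked JVM, per discovered suite):

// one Tests.Group per suite, each run in its own forked JVM
testGrouping in IntegrationTest := (definedTests in IntegrationTest).value.map { suite =>
  Tests.Group(
    name = suite.name,
    tests = Seq(suite),
    runPolicy = Tests.SubProcess(ForkOptions())
  )
}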

Related

Sequential run of tests in SBT: Are these configurations the same?

Are these 2 settings alternatives, or are both necessary to run the tests sequentially?
Global / concurrentRestrictions += Tags.limit(Tags.Test, 1)
And :
parallelExecution := false
Another question:
Does the first configuration guarantee that only one test task runs at a time, regardless of which test suite the tests belong to?

Sbt plugin run tasks before / after another task

I know, I saw "Run custom task automatically before/after standard task" but it seems outdated. I also found "SBT before/after hooks for a task" but it does not have any code example.
I am on SBT 0.13.17.
So I want to run my tasks MyBeforeTask and MyAfterTask automatically before and after another task, say compile.
So when you do sbt compile I would like to see:
...log...
This is my before test text
...compile log...
This is my after test text
So I would need to have:
object MyPlugin extends AutoPlugin {
  object autoImport {
    val MyBeforeTask = taskKey[Unit]("desc...")
    val MyAfterTask = taskKey[Unit]("desc...")
  }
  import autoImport._
  override def projectSettings: Seq[Def.Setting[_]] = Seq(
    MyBeforeTask := {
      println("This is my before test text")
    },
    MyAfterTask := {
      println("This is my after test text")
    }
  )
}
So I think I need things like dependsOn and in but I am not sure how to set them up.
It is not possible to configure a particular task to run after a given task, because that's not how the task dependency model works: when you invoke a task, its dependencies and then the task itself are executed, but there is no way to declare an "after" dependency. However, you can simulate that with dynamic tasks.
To run some task before another, you can use dependsOn:
compile in Compile := (compile in Compile).dependsOn(myBeforeTask).value
This establishes a dependency between two tasks, which ensures that myBeforeTask will be run before compile in Compile.
Note that there is a more generic way to make multiple tasks run one after another:
aggregateTask := Def.sequential(task1, task2, task3, task4).value
Def.sequential relies on the dynamic tasks machinery, which sets up dependencies between tasks at runtime. However, there are some limitations to this mechanism, in particular, you cannot reference the task being defined in the list of tasks to execute, so you can't use Def.sequential to augment existing tasks:
compile in Compile := Def.sequential(myBeforeTask, compile in Compile).value
This definition will fail at runtime with a strange error message which basically means that you have a loop in your task dependency graph. For other use cases, however, Def.sequential is extremely useful.
To run some task after another, however, you have to resort to defining a dynamic task dependency using Def.taskDyn:
compile in Compile := Def.taskDyn {
  val result = (compile in Compile).value
  Def.task {
    val _ = myAfterTask.value
    result
  }
}.value
Def.taskDyn accepts a block which must return a Def.Initialize[Task[T]], which will be used to instantiate a task to be run later, after the main body of Def.taskDyn completes. This allows one to compute tasks dynamically, and establish dependencies between tasks at runtime. As I said above, however, this can result in very strange errors happening at runtime, which are usually caused by loops in the dependency graph.
Therefore, the full example, with both "before" and "after" tasks, would look like this:
compile in Compile := Def.taskDyn {
  val result = (compile in Compile).value
  Def.task {
    val _ = myAfterTask.value
    result
  }
}.dependsOn(myBeforeTask).value
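Wired into the plugin from the question, the whole thing might look like this (a sketch reusing the question's key names):

object MyPlugin extends AutoPlugin {
  object autoImport {
    val MyBeforeTask = taskKey[Unit]("Runs before compile")
    val MyAfterTask = taskKey[Unit]("Runs after compile")
  }
  import autoImport._
  override def projectSettings: Seq[Def.Setting[_]] = Seq(
    MyBeforeTask := println("This is my before test text"),
    MyAfterTask := println("This is my after test text"),
    compile in Compile := Def.taskDyn {
      // run compile first, then the "after" task, preserving compile's result
      val result = (compile in Compile).value
      Def.task {
        val _ = MyAfterTask.value
        result
      }
    }.dependsOn(MyBeforeTask).value
  )
}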

Multiple SBT Configurations should be exclusive, but they all activate at the same time - why?

I have defined a minimal build.sbt with two custom profiles ‘dev’ and ‘staging’ (what SBT seems to call Configurations). However, when I run SBT with the Configuration that was defined first in the file (dev), both Configuration blocks are executed - and if both modify the same setting, the last one wins (staging).
This seems to break any notion of conditional activation, so what am I doing wrong with SBT?
For reference, I want to emulate the conditionally activated Profiles concept of Maven e.g. mvn test -P staging.
SBT version: 1.2.1
build.sbt:
name := "example-project"
scalaVersion := "2.12.6"
...
fork := true
// Environment-independent JVM property (always works)
javaOptions += "-Da=b"
// Environment-specific JVM property (doesn’t work)
lazy val Dev = config("dev") extend Test
lazy val Staging = config("staging") extend Test
val root = (project in file("."))
  .configs(Dev, Staging)
  .settings(inConfig(Dev)(Seq(javaOptions in Test += "-Dfoo=bar")))
  .settings(inConfig(Staging)(Seq(javaOptions in Test += "-Dfoo=qux")))
Command:
# Bad
sbt test
=> foo=qux
a=b
# Bad
sbt clean dev:test
=> foo=qux
a=b
# Good
sbt clean staging:test
=> foo=qux
a=b
Notice that despite the inConfig usage you're still setting javaOptions in Test, i.e. in the Test config. If you remove in Test, it works as expected:
...
  .settings(inConfig(Dev)(javaOptions += "-Dfoo=bar"))
  .settings(inConfig(Staging)(javaOptions += "-Dfoo=qux"))
(also, the Seq(...) wrapping is unnecessary)
Now in sbt:
> show Test/javaOptions
[info] *
> show Dev/javaOptions
[info] * -Dfoo=bar
> show Staging/javaOptions
[info] * -Dfoo=qux
You can achieve the same result by scoping each setting explicitly (without inConfig wrapping):
.settings(
  Dev/javaOptions += "-Dfoo=bar",
  Staging/javaOptions += "-Dfoo=qux",
  ...
)
(here Conf/javaOptions is the same as javaOptions in Conf)
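One caveat worth noting (my addition, based on sbt's docs for additional test configurations): for dev:test and staging:test to actually resolve the test tasks in those configs, the test tasks have to be defined there as well, e.g. with the stock Defaults.testTasks:

val root = (project in file("."))
  .configs(Dev, Staging)
  // makes test, testOnly, etc. available as dev:test / staging:test
  .settings(inConfig(Dev)(Defaults.testTasks))
  .settings(inConfig(Staging)(Defaults.testTasks))
  .settings(
    Dev/javaOptions += "-Dfoo=bar",
    Staging/javaOptions += "-Dfoo=qux"
  )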

Define Compound Task in SBT

I want to define a compound task in sbt so that all the tasks run in my CI job can be executed with a single command. For example, at the moment I am running:
clean coverage test scalastyle coverageReport package
However I'd like to just run
ci
which would effectively be an alias for all of the above tasks. Furthermore, I'd like to define this in a Scala file (as opposed to build.sbt) so I can include it in an already existing common Scala plugin, and thus it becomes available to all my projects.
So far (after much reading of the docs) I've managed to get a task that depends just on scalastyle by doing:
lazy val ci = inputKey[Unit]("Prints 'Runs All tasks for CI")
ci := {
  val scalastyleResult = (scalastyle in Compile).evaluated
  println("In the CI task")
}
however if I attempt to add another task (say the publish task) e.g:
ci := {
  val scalastyleResult = (scalastyle in Compile).evaluated
  val publishResult = (publish in Compile).evaluated
  println("In the CI task")
}
this fails with:
[error] [build.sbt]:52: illegal start of simple expression
[error] [build.sbt]:55: ')' expected but '}' found.
My first question is whether this approach is indeed the correct way to define a compound task.
If this is the case, then how can I make the ci task depend on all the tasks mentioned.
Put a blank line between the statements:

lazy val ci = inputKey[Unit]("Prints 'Runs All tasks for CI")

ci := {
  val scalastyleResult = (scalastyle in Compile).evaluated
  val publishResult = (publish in Compile).evaluated
  println("In the CI task")
}
Also, be aware that sbt will run the tasks ci depends on in parallel. Sometimes this is good, but not always, for example with clean.
There are several ways to run tasks in sequence.
One way:
commands += Command.command("ci") { state =>
  "clean" ::
  "coverage" ::
  "test" ::
  "scalastyle" ::
  "coverageReport" ::
  state
}
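If a plain alias is enough, sbt's built-in addCommandAlias gives the same sequential behaviour in one line (a sketch; it includes the package step from the original list):

addCommandAlias("ci", ";clean;coverage;test;scalastyle;coverageReport;package")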

How to pass scalacOptions (Xelide-below) to sbt via command line

I am trying to call sbt assembly from the command line, passing it a scalac compiler flag to elide code (-Xelide-below 1).
I have managed to get the flag working in the build.sbt by adding this line to the build.sbt
scalacOptions ++= Seq("-Xelide-below", "1")
And also it's working fine when I start sbt and run the following:
$> sbt
> set scalacOptions in ThisBuild ++= Seq("-Xelide-below", "0")
But I would like to know how to pass this in when starting sbt, so that my CI jobs can use it while doing different assembly targets (ie. dev/test/prod).
One way to pass the elide level as a command line option is to use system properties
scalacOptions ++= Seq("-Xelide-below", sys.props.getOrElse("elide.below", "0"))
and run sbt -Delide.below=20 assembly. Quick, dirty and easy.
Another more verbose way to accomplish the same thing is to define different commands for producing test/prod artifacts.
lazy val elideLevel = settingKey[Int]("elide code below this level.")

elideLevel in Global := 0

scalacOptions ++= Seq("-Xelide-below", elideLevel.value.toString)

def assemblyCommand(name: String, level: Int) =
  Command.command(s"${name}Assembly") { s =>
    s"set elideLevel in Global := $level" ::
      "assembly" ::
      s"set elideLevel in Global := 0" ::
      s
  }

commands += assemblyCommand("test", 10)
commands += assemblyCommand("prod", 1000)
and you can run sbt testAssembly prodAssembly. This buys you a cleaner command name, and you don't have to exit an active sbt shell session to call, for example, testAssembly. My sbt shell sessions tend to live for a long time, so I personally prefer the second option.