sbt plugin: run tasks before / after another task - scala

I know, I saw Run custom task automatically before/after standard task, but it seems outdated. I also found SBT before/after hooks for a task, but it has no code example.
I am on SBT 0.13.17.
So I want to run my tasks MyBeforeTask and MyAfterTask automatically before and after another task, say compile.
So when you run sbt compile I would like to see:
...log...
This is my before test text
...compile log...
This is my after test text
So I would need to have:
import sbt._

object MyPlugin extends AutoPlugin {
  object autoImport {
    val MyBeforeTask = taskKey[Unit]("desc...")
    val MyAfterTask = taskKey[Unit]("desc...")
  }
  import autoImport._

  override def projectSettings: Seq[Def.Setting[_]] = Seq(
    MyBeforeTask := {
      println("This is my before test text")
    },
    MyAfterTask := {
      println("This is my after test text")
    }
  )
}
So I think I need things like dependsOn and in but I am not sure how to set them up.

It is not possible to configure a particular task to run after a given task, because that's not how the task dependency model works: when you run a task, its dependencies and the task itself are executed, but there is no way to declare an "after" dependency. However, you can simulate that with dynamic tasks.
To run some task before another, you can use dependsOn:
compile in Compile := (compile in Compile).dependsOn(myBeforeTask).value
This establishes a dependency between two tasks, which ensures that myBeforeTask will be run before compile in Compile.
Note that there is a more generic way to make multiple tasks run one after another:
aggregateTask := Def.sequential(task1, task2, task3, task4).value
Def.sequential relies on the dynamic tasks machinery, which sets up dependencies between tasks at runtime. However, there are some limitations to this mechanism, in particular, you cannot reference the task being defined in the list of tasks to execute, so you can't use Def.sequential to augment existing tasks:
compile in Compile := Def.sequential(myBeforeTask, compile in Compile).value
This definition will fail at runtime with a strange error message which basically means that you have a loop in your task dependency graph. For other use cases, however, it is extremely useful, as the sketch below shows.
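For example, defining a brand-new aggregate task works fine, because the task being defined does not appear in its own task list. A minimal sketch (ciBuild is a hypothetical key of my own, not from the question):

lazy val ciBuild = taskKey[Unit]("Runs clean, compile and test in order")

// Def.sequential runs each task only after the previous one finishes.
ciBuild := Def.sequential(
  clean,
  compile in Compile,
  test in Test
).value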
To run some task after another, however, you have to resort to defining a dynamic task dependency using Def.taskDyn:
compile in Compile := Def.taskDyn {
  val result = (compile in Compile).value
  Def.task {
    val _ = myAfterTask.value
    result
  }
}.value
Def.taskDyn accepts a block which must return a Def.Initialize[Task[T]], which will be used to instantiate a task to be run later, after the main body of Def.taskDyn completes. This allows one to compute tasks dynamically, and establish dependencies between tasks at runtime. As I said above, however, this can result in very strange errors happening at runtime, which are usually caused by loops in the dependency graph.
Therefore, the full example, with both "before" and "after" tasks, would look like this:
compile in Compile := Def.taskDyn {
  val result = (compile in Compile).value
  Def.task {
    val _ = myAfterTask.value
    result
  }
}.dependsOn(myBeforeTask).value
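Putting both pieces together inside the plugin from the question, projectSettings could look like this (a sketch using the lower-case key names from the answer):

import sbt._
import sbt.Keys._

object MyPlugin extends AutoPlugin {
  object autoImport {
    val myBeforeTask = taskKey[Unit]("Runs before compile")
    val myAfterTask = taskKey[Unit]("Runs after compile")
  }
  import autoImport._

  override def projectSettings: Seq[Def.Setting[_]] = Seq(
    myBeforeTask := println("This is my before test text"),
    myAfterTask := println("This is my after test text"),
    // "before" via dependsOn, "after" via a dynamic task
    compile in Compile := Def.taskDyn {
      val result = (compile in Compile).value
      Def.task {
        val _ = myAfterTask.value
        result
      }
    }.dependsOn(myBeforeTask).value
  )
}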

Related

How to make SBT run test suites in parallel?

I have a bunch of integration tests run by sbt: N test suites with 1..M tests per suite.
I have set fork in IntegrationTest := true, but the test suites are always executed sequentially. According to the docs, this should not be the case: test suites should be executed concurrently.
The test suites are classes like the following:
class MyTestSuite1 extends FlatSpec with Matchers {
  // ...
  it should "do A" in {}
  it should "do B" in {}
}

class MyTestSuite2 extends FlatSpec with Matchers {
  // ...
  it should "do C" in {}
  it should "do D" in {}
}
the problem
MyTestSuite1 through MyTestSuiteN are executed sequentially (in alphabetical order, to be exact)
expectation
MyTestSuite1 through MyTestSuiteN are executed concurrently
env
.sbtopts:
-J-Xms1G
-J-Xmx4G
-J-XX:MaxMetaspaceSize=512m
-J-Xss4M
note
I noticed that all tests run on the same pool and thread, for example pool-1-thread-1 for all tests.
sbt version: 1.2.8
Scala: 2.12.8
os: MacOS 10.15, Ubuntu 19.04
Scalatest ver: 3.2.0-SNAP10
Tried sbt v. 1.3.2 - same result.
Adding
testOptions in IntegrationTest += Tests.Argument(TestFrameworks.ScalaTest, "-P4"),
does not help.
============
Update
fork in (IntegrationTest, test) := true works at the global level, but I have 2 projects and I want to make it work while preserving each project's relative path.
e.g.
lazy val `p1` = Project(id = "p1", base = file("./p1"))
  .configs(IntegrationTest)
  .settings(Defaults.itSettings: _*)
  .settings(
    fork in (IntegrationTest, test) := true,
    ...
  )

lazy val `p2` = Project(id = "p2", base = file("./p2"))
  .configs(IntegrationTest)
  .settings(Defaults.itSettings: _*)
  .settings(
    fork in (IntegrationTest, test) := true,
    ...
  )
does not run tests in parallel
instead, this runs in parallel, but, obviously, the working directory is set to "." rather than to "./p1" or "./p2" respectively:
fork in (IntegrationTest, test) := true

lazy val `p1` = Project(id = "p1", base = file("./p1"))
  .configs(IntegrationTest)
  .settings(Defaults.itSettings: _*)
SOLUTION:
It appears there's a testForkedParallel in IntegrationTest := true option which does exactly what I needed: it spawns a new JVM per test suite.
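In project terms, combining the per-project settings above with this solution gives a sketch like:

lazy val p1 = Project(id = "p1", base = file("./p1"))
  .configs(IntegrationTest)
  .settings(Defaults.itSettings: _*)
  .settings(
    fork in (IntegrationTest, test) := true,
    // one forked JVM per suite, suites eligible to run in parallel
    testForkedParallel in IntegrationTest := true
  )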
==============
Remark:
So, the only problem is that it now spawns as many JVMs as there are available CPUs, and I can't throttle test concurrency alone:
OPTION 1 - limits all of sbt's parallelism to 4 tasks, not just tests
concurrentRestrictions in Global := Seq(Tags.limitAll(4))
OPTION 2 - just does nothing (the tests are in a subproject)
concurrentRestrictions in Global += Tags.limit(Tags.Test, 4),
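One thing that may help, though it is an assumption on my part and not from the original thread: sbt tags each forked test group with Tags.ForkedTestGroup, so limiting that tag should throttle only the forked suite JVMs:

// Limit only forked test-suite JVMs to 4 at a time,
// leaving the rest of sbt's parallelism untouched.
concurrentRestrictions in Global += Tags.limit(Tags.ForkedTestGroup, 4)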
By default, tests executed in a forked JVM run sequentially. Refer to the following paragraph from the sbt testing docs:
The setting:
Test / fork := true
specifies that all tests will be executed in a single external JVM. See Forking for configuring standard options for forking. By default, tests executed in a forked JVM are executed sequentially. More control over how tests are assigned to JVMs and what options to pass to those is available with the testGrouping key.
So, you have two options:
Do not fork the JVM; your tests will then run in parallel by default.
If you want to fork and still run tests in parallel, go through these docs:
https://www.scala-sbt.org/1.x/docs/Testing.html#Forking+tests
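A minimal sketch of the testGrouping approach (adapted from those docs; the one-JVM-per-suite split is my choice of grouping, not part of the original answer):

// Give every suite its own group and its own forked JVM;
// separate groups are then eligible to run in parallel.
Test / testGrouping := (Test / definedTests).value.map { suite =>
  Tests.Group(
    name = suite.name,
    tests = Seq(suite),
    runPolicy = Tests.SubProcess(ForkOptions())
  )
}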

How to run a custom Task before SBT's `publish` task in an AutoPlugin?

I have a task called prePublishCheck which ensures that the Git working tree isn't dirty before publishing a JAR to Artifactory, which is supposed to run before the publish task in an AutoPlugin.
How can I get SBT to run this task before publish?
The following code has no effect with at least sbt 1.1.5:
// Has no effect on `publish`
prePublishCheck := { throw new Exception("test") },
publish := (prePublishCheck before publish).value
// Has no effect on `publish`
prePublishCheck := { throw new Exception("test") },
prePublishCheck := (prePublishCheck before publish).value
These similar answers are obsolete:
Run custom task automatically before/after standard task
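The dependsOn technique from the first answer above still applies, though; a minimal sketch, assuming prePublishCheck is defined as a taskKey in the plugin:

// Re-wire publish to depend on prePublishCheck, so the check runs
// (and can fail the build) before anything is published.
publish := publish.dependsOn(prePublishCheck).value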

Run a task after run

I start some docker containers before executing run to start my play-framework project:
run in Compile := (run in Compile dependsOn(dockerComposeUp)).evaluated
Now I'd like to tear down all docker containers using dockerComposeDown when Play stops. Any ideas on how to accomplish this?
I've already gone through Doing something after an input task, but that starts the containers and immediately stops them again. (In fact it even stops the containers before starting them.) Here is what I tried:
run in Compile := {
  (run in Compile dependsOn dockerComposeUp).evaluated
  dockerComposeDown.value
}
A different approach is to run your docker task sequentially after the run task. You could achieve this as described below:
lazy val testPrint = taskKey[Unit]("showTime")

testPrint := {
  println("Test print.")
}

lazy val testRun = taskKey[Unit]("test build")

testRun := {
  Def.sequential((runMain in Compile).toTask(" com.mycompany.MainClass "), testPrint).value
}
First define the testPrint task, which in your case would be your docker teardown task, and then define testRun, which runs both tasks sequentially. To run this, just do sbt testRun. After execution it should print out "Test print."
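Adapted to the docker use case, that might look like the sketch below; com.example.Main is a hypothetical placeholder for your application's entry point, and dockerComposeUp / dockerComposeDown are assumed to be Unit tasks defined elsewhere:

lazy val runWithDocker = taskKey[Unit]("runs the app between docker compose up and down")

runWithDocker := Def.sequential(
  dockerComposeUp,
  // hypothetical main class; substitute your own
  (runMain in Compile).toTask(" com.example.Main"),
  dockerComposeDown
).value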

Define Compound Task in SBT

I want to define a compound task in sbt so that all the tasks run in my CI job can be executed with a single command. For example, at the moment I am running:
clean coverage test scalastyle coverageReport package
However I'd like to just run
ci
Which would effectively be an alias for all of the above tasks. Furthermore, I'd like to define this in a Scala file (as opposed to build.sbt) so I can include it in an already existing common Scala plugin, thus making it available to all my projects.
So far (after much reading of the docs) I've managed to get a task that depends just on scalastyle by doing:
lazy val ci = inputKey[Unit]("Runs all tasks for CI")

ci := {
  val scalastyleResult = (scalastyle in Compile).evaluated
  println("In the CI task")
}
however if I attempt to add another task (say the publish task) e.g:
ci := {
  val scalastyleResult = (scalastyle in Compile).evaluated
  val publishResult = (publish in Compile).evaluated
  println("In the CI task")
}
this fails with:
[error] [build.sbt]:52: illegal start of simple expression
[error] [build.sbt]:55: ')' expected but '}' found.
My first question is whether this approach is indeed the correct way to define a compound task.
If this is the case, then how can I make the ci task depend on all the tasks mentioned.
Put a blank line between the statements:

lazy val ci = inputKey[Unit]("Runs all tasks for CI")

ci := {
  // ...
}
Also, know that SBT will run the dependent tasks of ci in parallel. Sometimes this is good, but not always - for example with clean.
There are several ways to run tasks in sequence.
One way:
commands += Command.command("ci") {
  "clean" ::
  "coverage" ::
  "test" ::
  "scalastyle" ::
  "coverageReport" ::
  _
}
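Another standard option (not mentioned in the original answer, but built into sbt) is a command alias, which likewise runs the steps one after another:

// Each step runs in sequence, exactly as if typed at the sbt prompt.
addCommandAlias("ci", ";clean;coverage;test;scalastyle;coverageReport;package")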

Uploading generated resources to S3 in stage task?

I have a custom plugin that generates some resource artifacts in a task genWro. My plugin also sets:
resourceGenerators in Compile <+= genWro in Compile
that takes care of creating the resources during packaging.
I want to then take the resulting Seq[java.io.File] that resourceGenerators returns and upload those files to S3 during sbt-native-packager's stage task.
I'd actually like to only generate these resources and upload them during stage, and not during packaging - but one thing at a time. Basically, I only want to generate and upload the files during my production build, which calls sbt clean stage, and skip it during local development, where I only call sbt run.
Given the comment where you said "My main concern is how to get the Seq[File] result from the task into a custom task that uploads those files to S3, and to have that custom task called only if stage is invoked.", the answer could be as follows.
You've got the stage task that comes from the sbt-native-packager plugin.
> help stage
Create a local directory with all the files laid out as they would be in the final distribution.
You've also another task, say s3deploy, that transfers resourceGenerators to S3.
lazy val s3deploy = taskKey[Unit]("Deploys files to S3")

s3deploy := {
  println(s"Files to transfer: ${(resourceGenerators in Compile).value}")
}
And here comes a solution - wire s3deploy to stage:
(stage in Universal) := {
  val _ = s3deploy.value
  (stage in Universal).value
}
The entire build.sbt follows:
import com.typesafe.sbt.packager.Keys._

packageArchetype.java_application

lazy val s3deploy = taskKey[Unit]("Deploys files to S3")

s3deploy := {
  println(s"Files to transfer: ${(resourceGenerators in Compile).value}")
}

(stage in Universal) := {
  val _ = s3deploy.value
  (stage in Universal).value
}