I start some docker containers before executing run to start my play-framework project:
run in Compile := (run in Compile dependsOn(dockerComposeUp)).evaluated
Now I'd like to tear down all docker containers using dockerComposeDown when Play stops. Any ideas on how to accomplish this?
I've already gone through Doing something after an input task, but that starts the containers and immediately stops them again. (In fact it even stops the containers before starting them.) Here is what I tried:
run in Compile := {
(run in Compile dependsOn(dockerComposeUp)).evaluated
dockerComposeDown.value
}
A different approach is to run your docker task and the run task sequentially. You could achieve this as described below:
lazy val testPrint = taskKey[Unit]("showTime")
testPrint := {
println("Test print.")
}
lazy val testRun = taskKey[Unit]("test build")
testRun := {
Def.sequential((runMain in Compile).toTask(" com.mycompany.MainClass "), testPrint).value
}
First define the testPrint task, which in your case could be the docker task, and then define testRun, which will run both tasks sequentially. To run this just do sbt testRun. After execution it should print out "Test print."
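Applied to the docker question above, a minimal sketch of the same idea (assuming dockerComposeUp and dockerComposeDown are plain task keys, as the dependsOn usage in the question suggests) could look like this:

lazy val runWithDocker = taskKey[Unit]("Start the containers, run the app, then tear the containers down")

runWithDocker := Def.sequential(
  dockerComposeUp,                // start the containers first
  (run in Compile).toTask(""),    // run the Play app with no extra arguments
  dockerComposeDown               // tear the containers down once run returns
).value

Note that Def.sequential stops at the first failure, so dockerComposeDown would not execute if the application exits with an error.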
My e2e test task sends some HTTP requests to the server. I want to start that server (Play Framework based) on a separate JVM, then start the test which hits the server and let it finish, then stop the server.
I looked through many SO threads; so far I found these options:
use sbt-sequential
use sbt-revolver
use alias
but in my experiments setting fork doesn't work, i.e. it still blocks execution when the server is started:
fork := true
fork in run := true
fork in Test := true
fork in IntegrationTest := true
The startServer/stopServer examples in the sbt docs also seem to block.
I also tried just starting the server in the background from the shell, but the server is quickly shut down, similar to this question:
nohup sbt -Djline.terminal=jline.UnsupportedTerminal web/run < /dev/null > /tmp/sbt.log 2>&1 &
Related questions:
scala sbt test run setup and cleanup command once on multi project
How do I start a server before running a test suite in SBT?
fork doesn't run tasks in parallel - it just makes sure that tests are run in a separate JVM, which helps with things like shutdown hooks or disconnecting from services that don't handle resource release properly (e.g. a DB connection that never calls disconnect).
If you want to use the same sbt to start the server AND run tests against that instance (which sounds like an easily breakable antipattern, BTW) you can use something like:
reStart
it:test
reStop
However, that would be tricky because reStart returns immediately, so the tests would start once the server setup has started but not necessarily completed: a race condition, failing tests, or blocking all tests until the server finishes starting.
This is why nobody does it. A much easier solution to maintain is to:
start the server in the test in some beforeAll method, and make this method complete only after the server is responding to queries
shut it down in some afterAll method (or somehow handle both of these using something like cats.effect.Resource or similar)
and, depending on the situation:
run tests sequentially to avoid starting two instances at the same time, or
generate a config for each test so that they can be run in parallel without clashing on port allocations
Anything else is just a hack that is going to fail sooner rather than later.
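As a rough illustration of that approach with ScalaTest, a sketch (startServer, stopServer and waitUntilResponding are hypothetical placeholders for whatever mechanism actually launches and probes the Play app):

import org.scalatest.{BeforeAndAfterAll, Suite}

trait ServerFixture extends BeforeAndAfterAll { this: Suite =>

  override def beforeAll(): Unit = {
    super.beforeAll()
    startServer()                                        // hypothetical: launch the app under test
    waitUntilResponding("http://localhost:9000/health")  // hypothetical: block until the server answers
  }

  override def afterAll(): Unit = {
    try stopServer()                                     // hypothetical: shut the server down
    finally super.afterAll()
  }

  // to be implemented for your setup
  def startServer(): Unit
  def stopServer(): Unit
  def waitUntilResponding(url: String): Unit
}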
Answering my own question, here is what we ended up doing:
use "sbt stage" to create a standalone server jar and run script for the Play web app (in web/target/universal/stage/bin/)
create a run_integration_tests.sh shell script that starts the server, waits 30 seconds, and starts the tests
add a runIntegrationTests task in build.sbt which calls run_integration_tests.sh, and add it to it:test
run_integration_tests.sh:
#! /bin/bash
CURDIR=$(pwd)
echo "Starting integration/e2e test runner"
date >runner.log
export JAVA_OPTS="-Dplay.server.http.port=9195 -Dconfig.file=$CURDIR/web/conf/application_test.conf -Xmx2G"
rm -f "$CURDIR/web/target/universal/stage/RUNNING_PID"
echo "Starting server"
nohup web/target/universal/stage/bin/myapp >>runner.log 2>&1 &
pid=$!
echo "Webserver PID is $pid"
echo "Waiting for server start"
sleep 30
echo "Running the tests"
sbt "service/test:run-main com.blah.myapp.E2ETest"
ERR="$?"
echo "Tests Done at $(date), killing server"
kill $pid
echo "Waiting for server exit"
wait $pid
echo "All done"
if [ $ERR -ne 0 ]; then
cat runner.log
exit "$ERR"
fi
build.sbt:
lazy val runIntegrationTests = taskKey[Unit]("Run integration tests")
runIntegrationTests := {
val s: TaskStreams = streams.value
s.log.info("Running integration tests...")
val shell: Seq[String] = Seq("bash", "-c")
val runTests: Seq[String] = shell :+ "./run_integration_tests.sh"
if ((runTests !) == 0) {
s.log.success("Integration tests successful!")
} else {
s.log.error("Integration tests failed!")
throw new IllegalStateException("Integration tests failed!")
}
}
lazy val root = project.in(file("."))
.aggregate(service, web, tools)
.configs(IntegrationTest)
.settings(Defaults.itSettings)
.settings(
publishLocal := {},
publish := {},
(test in IntegrationTest) := (runIntegrationTests dependsOn (test in IntegrationTest)).value
)
Calling sbt in CI/Jenkins:
sh 'sbt clean coverage test stage it:test'
I know, I saw Run custom task automatically before/after standard task but it seems outdated. I also found SBT before/after hooks for a task but it does not have any code example.
I am on SBT 0.13.17.
So I want to run my tasks MyBeforeTask and MyAfterTask automatically before and after another task, say compile.
So when you do sbt compile I would like to see:
...log...
This is my before test text
...compile log...
This is my after test text
So I would need to have:
object MyPlugin extends AutoPlugin {
object autoImport {
val MyBeforeTask = taskKey[Unit]("desc...")
val MyAfterTask = taskKey[Unit]("desc...")
}
import autoImport._
override def projectSettings: Seq[Def.Setting[_]] = Seq(
  MyBeforeTask := {
    println("This is my before test text")
  },
  MyAfterTask := {
    println("This is my after test text")
  }
)
}
So I think I need things like dependsOn and in but I am not sure how to set them up.
It is not possible to configure a particular task to run after a given task, because that is not how the task dependency model works: when you invoke a task, its dependencies and then the task itself are executed, but there is no way to declare an "after" dependency. However, you can simulate that with dynamic tasks.
To run some task before another, you can use dependsOn:
compile in Compile := (compile in Compile).dependsOn(myBeforeTask).value
This establishes a dependency between two tasks, which ensures that myBeforeTask will be run before compile in Compile.
Note that there is a more generic way to make multiple tasks run one after another:
aggregateTask := Def.sequential(task1, task2, task3, task4).value
Def.sequential relies on the dynamic tasks machinery, which sets up dependencies between tasks at runtime. However, there are some limitations to this mechanism, in particular, you cannot reference the task being defined in the list of tasks to execute, so you can't use Def.sequential to augment existing tasks:
compile in Compile := Def.sequential(myBeforeTask, compile in Compile).value
This definition will fail at runtime with a strange error message which basically means that you have a loop in your task dependency graph. For other use cases, however, Def.sequential is extremely useful.
To run some task after another, however, you have to resort to defining a dynamic task dependency using Def.taskDyn:
compile in Compile := Def.taskDyn {
val result = (compile in Compile).value
Def.task {
val _ = myAfterTask.value
result
}
}.value
Def.taskDyn accepts a block which must return a Def.Initialize[Task[T]], which will be used to instantiate a task to be run later, after the main body of Def.taskDyn completes. This allows one to compute tasks dynamically, and establish dependencies between tasks at runtime. As I said above, however, this can result in very strange errors happening at runtime, which are usually caused by loops in the dependency graph.
Therefore, the full example, with both "before" and "after" tasks, would look like this:
compile in Compile := Def.taskDyn {
val result = (compile in Compile).value
Def.task {
val _ = myAfterTask.value
result
}
}.dependsOn(myBeforeTask).value
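Wired into the plugin from the question, the settings could look roughly like this (a sketch reusing the keys defined there):

override def projectSettings: Seq[Def.Setting[_]] = Seq(
  MyBeforeTask := println("This is my before test text"),
  MyAfterTask := println("This is my after test text"),
  compile in Compile := Def.taskDyn {
    val result = (compile in Compile).value
    Def.task {
      val _ = MyAfterTask.value   // runs after compile has finished
      result
    }
  }.dependsOn(MyBeforeTask).value // runs before compile starts
)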
In my SBT (0.13.16) build, I have the following task:
startThing := {
val bin_path = s"${file(".").getAbsolutePath}/bin"
val result = s"$bin_path/start_thing".!
if (result != 0)
throw new RuntimeException("Could not start Thing..")
true
}
And start_thing contains:
(run_subprocess &)
and my build hangs.
I can see that start_thing exits (the process table does not have it as an entry) but adding some printlns to the task shows that it's stuck on val result = s"$bin_path/start_thing".!.
If I kill the run_subprocess process then SBT unblocks and runs normally.
In this particular case, run_subprocess has set up some Kubernetes port-forwarding that needs to be there in order for subsequent tests to work.
Try daemonising the background process, like so:
(run_subprocess >/dev/null 2>&1 &)
The issue could be output from run_subprocess still going to sbt parent as suggested here.
I was able to replicate the issue in both sbt 0.13.17 and 1.0.2. Daemonising worked in both.
Regardless of my comment, in my case the reason for the hanging was actually leaving STDOUT and STDERR handles open in a daemon started by the script. This is OK:
/usr/local/bin/minio server "$minio_data_dir" > /dev/null 2>&1 & # and start the server
and this is NOT OK:
/usr/local/bin/minio server "$minio_data_dir" 2>&1 & # and start the server
So the hanging occurred randomly even with the accepted answer's start-in-the-background approach.
Thus this solution did not need any wrapper bash scripts. This is how the code looked in build.sbt:
lazy val startLocalS3 = inputKey[Unit]("localS3")
lazy val startLocalS3Task = TaskKey[Unit]("localS3", "create local s3")
lazy val core: Project = project
.in(file("."))
.settings(
name := "rfco-core",
startLocalS3Task := {
val cmd: Seq[String] = Seq("bash" , "-c" , "./CI/start-s3-svr.sh")
import sys.process._
cmd.mkString(" ").!!
},
fork in startLocalS3Task := true,
compile.in(Compile) := (compile in Compile).dependsOn((startLocalS3Task)).value
// you might want to use Test scope ^^ here
)
My tests are slow. Real slow. Like I can get another cup of coffee and read some articles while waiting for them to finish. So I added this task to build.sbt just to alert me when my testing is finished.
lazy val alertMe = taskKey[Unit]("Alert me when testing is completed.")
alertMe in Test := {
"say \"testing is completed\""!
}
Note that I use the say command on OS X. I then used this task like this:
;test ;alertMe
Voila! This works great... only for successful testing. If any test case fails, the test task returns an error and alertMe is not invoked.
This behavior is pretty understandable, but I want my task, alertMe, to run regardless of the test task result. How can I do this?
Maybe you can run the test task inside the alertMe task, like:
lazy val alertMe = taskKey[Unit]("Alert me when testing is completed.")
alertMe := {
Command.process("test", state.value)
"say \"testing is completed\""!
}
Usage: sbt alertMe. It will run the test task and then the shell command.
Command.process will execute the test task without causing the current task to fail, so the shell command will always be executed.
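Another option, not from the original answer but a sketch of the same goal, is to capture the test outcome with .result, which yields a Result value instead of failing the task, run the alert, and then re-throw on failure so the build still reports it:

lazy val testWithAlert = taskKey[Unit]("Run tests and alert regardless of the outcome.")

testWithAlert := {
  import scala.sys.process._
  val result = (test in Test).result.value   // Result[Unit]: Value(_) on success, Inc(_) on failure
  "say \"testing is completed\"".!           // always runs, whatever the test outcome
  result match {
    case Value(_)   => ()
    case Inc(cause) => throw cause           // re-throw so the failure is still visible
  }
}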
I want to define a compound task in sbt so that all tasks that are run in my CI job can be executed with a single command. For example, at the moment I am running:
clean coverage test scalastyle coverageReport package
However I'd like to just run
ci
Which would effectively be an alias for all of the above tasks. Furthermore, I'd like to define this in a Scala file (as opposed to build.sbt) so I can include it in an already existing common Scala plugin and thus it becomes available to all my projects.
So far (after much reading of the docs) I've managed to get a task that depends just on scalastyle by doing:
lazy val ci = inputKey[Unit]("Prints 'Runs All tasks for CI")
ci := {
val scalastyleResult = (scalastyle in Compile).evaluated
println("In the CI task")
}
however if I attempt to add another task (say the publish task), e.g.:
ci := {
val scalastyleResult = (scalastyle in Compile).evaluated
val publishResult = (publish in Compile).evaluated
println("In the CI task")
}
this fails with:
[error] [build.sbt]:52: illegal start of simple expression
[error] [build.sbt]:55: ')' expected but '}' found.
My first question is whether this approach is indeed the correct way to define a compound task.
If this is the case, then how can I make the ci task depend on all the tasks mentioned.
Put a blank line between the statements:

lazy val ci = inputKey[Unit]("Prints 'Runs All tasks for CI")

ci := {
  ...
}
Also, be aware that sbt will run the tasks ci depends on in parallel. Sometimes this is good, but not always, for example with clean.
There are several ways to run tasks in sequence.
One way:
commands += Command.command("ci") {
"clean" ::
"coverage" ::
"test" ::
"scalastyle" ::
"coverageReport" ::
_
}
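To make this available from a shared Scala plugin rather than build.sbt, as the question asks, a rough sketch (CiPlugin is an arbitrary name; the command body just replays the same command list):

import sbt._
import sbt.Keys._

object CiPlugin extends AutoPlugin {
  override def trigger = allRequirements

  override def projectSettings: Seq[Def.Setting[_]] = Seq(
    // prepend each step to the remaining command queue, in order
    commands += Command.command("ci") { state =>
      "clean" :: "coverage" :: "test" :: "scalastyle" :: "coverageReport" :: "package" :: state
    }
  )
}

Any project that has this plugin on its classpath (e.g. via a shared plugins project) can then run sbt ci.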