SBT: how to modify it:test with a task that calls the original test task?

In the snippet of my Build.scala file below, the itTestWithService task starts a test server before running the integration tests and stops it afterwards.
I'd like to attach this itTestWithService task to the it:test key. But how?
lazy val mohs =
  Project(id = "mohs", base = file("."))
    .settings(
      // I'd like the following but it creates a cycle that fails at runtime:
      //   test in IntegrationTest <<= testWithService
      itTestWithService <<= testWithService
    )
val itTestWithService = taskKey[Unit]("run integration test with background server")

/** run integration tests against a test server. (the server is started before the tests and stopped after the tests) */
lazy val testWithService = Def.task {
  val log = streams.value.log
  val launched = (start in testService).value
  launched match {
    case Success(_) =>
      testAndStop.value
    case Failure(e) =>
      val stack = e.getStackTrace().mkString("\n")
      log.error(s"failed to start test server: $e \n ${stack}")
  }
}
/** run integration tests and then stop the test server */
lazy val testAndStop = Def.taskDyn {
  val _ = (test in IntegrationTest).value
  stop in testService
}

In a related GitHub issue discussion, Josh suggested an approach for the particular case I asked about (overriding it:test with a task that calls the original test task).
The approach works by reimplementing the test task. I don't know if there's a more general way to get access to the original version of the task. (A more general way would be a better answer than this one!)
Here's how to reimplement the it:test task:
/** run integration tests (just like it:test does, but explicitly, so we can overwrite the it:test key) */
lazy val itTestTask: Initialize[Task[Unit]] = Def.taskDyn {
  for {
    results <- (executeTests in IntegrationTest)
  } yield { Tests.showResults(streams.value.log, results, "test missing?") }
}
Here's the composite integration test task (evolved slightly from the original question, though that version should work too):
/** run integration tests against a test server. (the server is started before the tests and stopped after the tests) */
lazy val testWithServiceTask = Def.taskDyn {
  (start in testService).value match {
    case Success(_) =>
      testAndStop
    case Failure(e) =>
      val stack = e.getStackTrace().mkString("\n")
      streams.value.log.error(s"failed to start test server: $e \n ${stack}")
      emptyTask
  }
}
/** run integration tests and then stop the test server */
lazy val testAndStop = Def.taskDyn {
  SbtUtil.sequence(itTestTask, stop in testService)
}
val emptyTask = Def.task {}
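SbtUtil.sequence isn't shown in the answer; a minimal sketch of what such a sequencing helper could look like in sbt 0.13 (an assumption, not the original implementation) is:

import sbt._
import Def.Initialize

object SbtUtil {
  /** Run task `a`, then task `b`, and return b's result. A sketch only. */
  def sequence[A, B](a: Initialize[Task[A]], b: Initialize[Task[B]]): Initialize[Task[B]] =
    Def.taskDyn {
      val _ = a.value // `a` becomes a dependency of this dynamic task, so it completes first
      b               // then the returned task `b` is run
    }
}

In sbt 0.13 a TaskKey[T] is itself an Initialize[Task[T]], so a scoped key such as stop in testService can be passed directly as the second argument.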
And now plugging the composite task we've built into the it:test key doesn't create a cycle:
lazy val mohs =
  Project(id = "mohs", base = file("."))
    .settings(
      test in IntegrationTest <<= testWithServiceTask
    )

You can add custom test configurations and tags in Build.scala. Here's the sample code from one of my projects. Keep in mind, you don't have to bind it to it:test; you could name it anything you want.
lazy val AcceptanceTest = config("acc") extend(Test)

lazy val Kernel = Project(
  id = "kernel",
  base = file("."),
  settings = defaultSettings ++ AkkaKernelPlugin.distSettings ++ Seq(
    libraryDependencies ++= Dependencies.Kernel,
    distJvmOptions in Dist := "-Xms256M -Xmx2048M",
    outputDirectory in Dist := file("target/dist"),
    distMainClass in Dist := "akka.kernel.Main system.SystemKernel"
  )
).configs(AcceptanceTest)
  .settings(inConfig(AcceptanceTest)(Defaults.testTasks): _*)
  .settings(testOptions in AcceptanceTest := Seq(
    Tests.Argument("-n", "AcceptanceTest"),
    Tests.Argument("-oD")))
Just pay attention to the lazy val at the top and the .configs part.
With that setup, when I type acc:test, it runs all tests tagged AcceptanceTest. And you can just boot the server as part of what gets called by your test suite. You can even tag tests by whether they need the server or not, so you can separate them once your suite gets big and takes longer to run.
Edit: Added to respond to comments.
To tag tests, create a tag like this:
import org.scalatest.Tag
object AcceptanceTest extends Tag("AcceptanceTest")
then put it here in your tests...
it("should allow any actor to subscribe to any channel", AcceptanceTest) {
And that corresponds to the same build setup above. Only tests with that tag will be run when I call acc:test.
What came to mind for your problem was the solution I use for that same situation. Right now you're doing the work in Build.scala. I'm not sure it's possible to do what you're describing right there... but what I do accomplishes the same thing a bit differently. I have a trait that I mix in to all tests that need a vagrant server, and I tag the tests that use it as VagrantTest.
It works like a singleton: if one or more tests need it, it boots, but only one instance boots and all tests share it.
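A minimal sketch of what such a mix-in could look like (ScalaTest assumed; TestVagrantServer and ensureStarted are hypothetical names, not from the original project):

import org.scalatest.{BeforeAndAfterAll, Suite}

// hypothetical singleton that boots the vagrant-backed server at most once per JVM
object TestVagrantServer {
  @volatile private var started = false
  def ensureStarted(): Unit = synchronized {
    if (!started) {
      // boot the test server here
      started = true
    }
  }
}

// mix this into every suite whose tests are tagged VagrantTest
trait NeedsVagrantServer extends BeforeAndAfterAll { this: Suite =>
  override def beforeAll(): Unit = {
    TestVagrantServer.ensureStarted()
    super.beforeAll()
  }
}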
You could try doing the same, but override it:test in the config file: instead of "acc" in the example above, put "it". If that's not quite what you're looking for, maybe someone else will chime in.
So, basically, when I call it:test it ends up running all tests with the IntegrationTest tag on them (as well as those with VagrantTest and a few others), so the longer-running server tests don't get run as often (they take too long).

Related

How do I start a server before running a test suite in SBT?

The problem is that if I do test in Test <<= (taskA, taskB) { (A, B) => A doFinally B } or test in Test := (taskB dependsOn taskA).value, and taskA is forked, then sbt execution doesn't continue to taskB and gets stuck indefinitely. It's caused by doFinally/dependsOn, because they probably make it a single-threaded, sequential execution. But I can't find any other way to order those two tasks so that they run sequentially.
So far I've got this:
lazy val startServer = taskKey[Unit]("Start PingPong Server before running Scala-JS tests")

lazy val jvm =
  project.in(file("jvm"))
    .settings(
      fork in (Test, runMain) := true,
      startServer := {
        (runMain in Test).toTask(" com.example.PingPong").value
      }
    )
lazy val js =
  project.in(file("js"))
    .settings(
      test in Test <<= (startServer in Project("jvm", file("jvm")), test in (Test, fastOptStage)) {
        (startServer, test) => startServer doFinally test
      }
    )
The sbt execution stops on the startServer task, which spawns a thread, even a daemon one. I tried fork in startServer := true but it didn't help.
I also tried dependsOn, but it also blocks:
test in Test := {
  (test in (Test, fastOptStage)).dependsOn(startServer in Project("jvm", file("jvm"))).value
}
If I do not start the server in the main class PingPong, it behaves as expected. Also, if I do the following it works, but the order of execution is random and I don't know how to enforce it without doFinally.
test in Test := {
  (startServer in Project("jvm", file("jvm"))).value
  (test in (Test, fastOptStage)).value
}
I think I'm going to have to try sbt-sequential or fork a new process.
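For the "fork a new process" route, a rough sketch using the standard library's process API might look like this in the jvm project's build definition (the classpath handling and main class are illustrative assumptions):

import scala.sys.process._

lazy val startServerProcess = taskKey[Unit]("Start the PingPong server in a separate OS process")

startServerProcess := {
  // build a -cp argument from the test classpath; Attributed.data unwraps the plain files
  val cp = Attributed.data((fullClasspath in Test).value).mkString(java.io.File.pathSeparator)
  // run() launches the JVM and returns immediately, leaving the server running in the background
  Process(Seq("java", "-cp", cp, "com.example.PingPong")).run()
  ()
}

A task defined this way completes as soon as the process is launched, so a dependsOn chain on it would not block.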
Instead of dealing with sbt, I can suggest a much simpler workaround: create a class with a static method that starts the server (and checks that the server is not already started), and explicitly start the server in each test. You'll need to set fork in Test := true to run the server in a separate JVM and shut it down after the tests finish.
I use this approach to start an embedded Cassandra server for integration tests.
In my case it's a static method in Java; you can do the same in Scala with a companion object.
override def beforeAll() {
  log.info(s"Start Embedded Cassandra Server")
  EmbeddedCassandraServerHelper.startEmbeddedCassandra("/timeseries.yaml")
}
and
public static void startEmbeddedCassandra(String yamlFile, String tmpDir) throws TTransportException, IOException, ConfigurationException {
  if (cassandraDaemon != null) {
    /* nothing to do Cassandra is already started */
    return;
  }
  // ... Start new server
}
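A Scala version of the same idea, using a companion-style object instead of a Java static method, might look roughly like this (the object name is made up; the helper call mirrors the one in the answer and assumes the cassandra-unit library):

import org.cassandraunit.utils.EmbeddedCassandraServerHelper

// starts the embedded server at most once per JVM, no matter how many suites call it
object TestCassandra {
  @volatile private var started = false

  def start(): Unit = synchronized {
    if (!started) {
      EmbeddedCassandraServerHelper.startEmbeddedCassandra("/timeseries.yaml")
      started = true
    }
  }
}

Each suite can then call TestCassandra.start() from its beforeAll.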
For some reason, none of these settings forks a new process:
fork := true
fork in runMain := true
fork in Test := true
fork in (Test, runMain) := true
That's why the sequential dependsOn/doFinally blocks: the process is not forked...
I tried exactly the same thing with Revolver and it works. I recommend Revolver to anyone trying to fork a process in sbt, because, at least from this experience, you can't rely on sbt's fork.
lazy val startServer = taskKey[Unit]("Start PingPong Server before running JS tests")

lazy val jvm =
  project.in(file("jvm"))
    .settings(
      mainClass in Revolver.reStart := Option("com.example.PingPong"),
      startServer := {
        (Revolver.reStart in Test).toTask("").value
      }
    )

lazy val js =
  project.in(file("js"))
    .settings(
      test in Test := (test in (Test, fastOptStage)).dependsOn(startServer in Project("jvm", file("jvm"))).value
    )

Play framework: Running separate module of multi-module application

I'm trying to create a multi-module application and run one of its modules separately from the others (from another machine).
Project structure looks like this:
        main
       /    \
module1      module2
I want to run module1 as a separate jar file (or is there a better way of doing this?), which I will run from another machine (I want to connect it to the main app using Akka remoting).
What I'm doing:
1. Running the "play dist" command.
2. Unzipping module1.zip from the universal folder.
3. Setting +x mode on the bin/module1 executable.
4. Setting my main class (shown below): instead of play.core.server.NettyServer I'm putting my main class: declare -r app_mainclass="module1.foo.Launcher"
5. Running with an external application.conf file.
Here is my main class:
class LauncherActor extends Actor {
  def receive = {
    case a => println(s"Received msg: $a ")
  }
}

object Launcher extends App {
  val system = ActorSystem("testsystem")
  val listener = system.actorOf(Props[LauncherActor], name = "listener")
  println(listener.path)
  listener ! "hi!"
  println("Server ready")
}
Here is the console output:
#pavel bin$ ./module1 -Dconfig.file=/Users/pavel/projects/foobar/conf/application.conf
[WARN] [10/18/2013 18:56:03.036] [main] [EventStream(akka://testsystem)] [akka.event-handlers] config is deprecated, use [akka.loggers]
akka://testsystem/user/listener
Server ready
Received msg: hi!
#pavel bin$
So the system shuts down as soon as it reaches the last line of the main method. If I run this code without Play, it works as expected: the object is loaded and it waits for messages, which is the expected behavior.
Am I doing something wrong? Should I set some options in the module1 executable? Other ideas?
Thanks in advance!
Update:
Versions:
Scala - 2.10.3
Play! - 2.2.0
SBT - 0.13.0
Akka - 2.2.1
Java 1.7 and 1.6 (tried both)
Build properties:
lazy val projectSettings = buildSettings ++ play.Project.playScalaSettings ++ Seq(
  resolvers := buildResolvers,
  libraryDependencies ++= dependencies
) ++ Seq(
  scalacOptions += "-language:postfixOps",
  javaOptions in run ++= Seq(
    "-XX:MaxPermSize=1024m",
    "-Xmx4048m"
  ),
  Keys.fork in run := true
)

lazy val common = play.Project("common", buildVersion, dependencies, path = file("modules/common"))

lazy val root = play.Project(appName, buildVersion, settings = projectSettings).settings(
  resolvers ++= buildResolvers
).dependsOn(common, module1, module2).aggregate(common, module1, module2)

lazy val module1 = play.Project("module1", buildVersion, path = file("modules/module1")).dependsOn(common).aggregate(common)

lazy val module2: Project = play.Project("module2", buildVersion, path = file("modules/module2")).dependsOn(common).aggregate(common)
So I found a dirty workaround and will use it until I find a better solution. In case someone is interested, I've added this code at the bottom of the Server object:
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

val shutdown = Future {
  readLine("Press 'ENTER' key to shutdown")
}.map { q =>
  println("**** Shutting down ****")
  System.exit(0)
}

Await.result(shutdown, 100 days)
And now the system runs until I hit the ENTER key in the console. Dirty, I agree, but I didn't find a better solution.
If something better comes along, of course I will mark it as the answer.
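An alternative sketch (not from this thread) that avoids the readLine hack is to block the main thread on the actor system itself; in Akka 2.2 ActorSystem provides awaitTermination for this:

object Launcher extends App {
  val system = ActorSystem("testsystem")
  val listener = system.actorOf(Props[LauncherActor], name = "listener")
  println(listener.path)
  listener ! "hi!"
  println("Server ready")
  // blocks here until system.shutdown() is called, so the process stays alive
  system.awaitTermination()
}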

How to measure and display the running time of a single test?

I have a potentially long-running test written with scalatest:
test("a long running test") {
failAfter(Span(60, Seconds)) {
// ...
}
}
Even if the test finishes within the timeout limit, its running time can be valuable to the person who runs the test. How can I measure and display the running time of this single particular test in scalatest's output?
Update: Currently I measure the running time with my own function, just as in r.v's answer. I'd like to know if scalatest offers this functionality already.
The -oD option will give the duration of each test. For example, I use the following in my build.sbt:
testOptions in Test += Tests.Argument("-oD")
EDIT:
You can also use the following for individual runs:
> test-only org.acme.RedSuite -- -oD
See http://www.scalatest.org/user_guide/using_scalatest_with_sbt.
Moreover, you can define the following function for general time measurements:
def time[T](str: String)(thunk: => T): T = {
  print(str + "... ")
  val t1 = System.currentTimeMillis
  val x = thunk
  val t2 = System.currentTimeMillis
  println((t2 - t1) + " msecs")
  x
}
and use it anywhere (not dependent on ScalaTest)
test("a long running test") {
time("test running"){
failAfter(Span(60, Seconds)) {
// ...
}
}
In addition to r.v.'s answer:
If you have a multi-project build, testOptions in Test += Tests.Argument("-oD") does not work at the root level of build.sbt, because Test refers to src/test/scala. You have to put it inside your sub-project's settings:
Project(id = "da_project", base = file("da_project"))
.settings(
testOptions in Test += Tests.Argument("-oDG"),
libraryDependencies ++= Seq(
...
)
)

Parallel execution of tests

I've noticed that SBT is running my specs2 tests in parallel. This seems good, except that one of my tests involves reading from and writing to a file, and hence fails unpredictably; see the example below.
Are there any better options than
setting all tests to run in serial, or
using separate file names and tear-downs for each test?
class WriteAndReadSpec extends Specification {
  val file = new File("testFiles/tmp.txt")

  "WriteAndRead" should {
    "work once" in {
      new FileWriter(file, false).append("Foo").close
      Source.fromFile(file).getLines().toList(0) must_== "Foo"
    }
    "work twice" in {
      new FileWriter(file, false).append("Bar").close
      Source.fromFile(file).getLines().toList(0) must_== "Bar"
    }
  }

  trait TearDown extends After {
    def after = if (file.exists) file.delete
  }
}
In addition to what is written about sbt above, you should know that specs2 runs all the examples of your specifications concurrently by default.
You can still declare that, for a given specification, the examples must be executed sequentially. To do that, you simply add sequential at the beginning of your specification:
class WriteAndReadSpec extends Specification {
  val file = new File("testFiles/tmp.txt")

  sequential

  "WriteAndRead" should {
    ...
  }
}
A fixed sequence of tests in a suite can lead to interdependencies between test cases and a maintenance burden.
I would prefer to test without touching the file system (regardless of whether it is business logic or serialization code), or, if that is unavoidable (as when testing integration with file feeds), to create temporary files:
// Create temp file.
File temp = File.createTempFile("pattern", ".suffix");
// Delete temp file when program exits.
temp.deleteOnExit();
The wiki link Pablo Fernandez gave in his answer is pretty good, though there's a minor error in the example that might throw one off (though, being a wiki, I can and did correct it). Here's a project/Build.scala that actually compiles and produces the expected filters, though I didn't actually try it out with tests.
import sbt._
import Keys._

object B extends Build {
  lazy val root =
    Project("root", file("."))
      .configs(Serial)
      .settings(inConfig(Serial)(Defaults.testTasks): _*)
      .settings(
        libraryDependencies ++= specs,
        testOptions in Test := Seq(Tests.Filter(parFilter)),
        testOptions in Serial := Seq(Tests.Filter(serialFilter))
      )
      .settings(parallelExecution in Serial := false)

  def parFilter(name: String): Boolean = !(name startsWith "WriteAndReadSpec")
  def serialFilter(name: String): Boolean = name startsWith "WriteAndReadSpec"

  lazy val Serial = config("serial") extend(Test)

  lazy val specs = Seq(
    "org.specs2" %% "specs2" % "1.6.1",
    "org.specs2" %% "specs2-scalaz-core" % "6.0.1" % "test"
  )
}
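With that build definition, the intended usage is roughly:
> test
> serial:test
where test runs everything except WriteAndReadSpec in parallel, and serial:test runs only WriteAndReadSpec with parallelExecution disabled (given the filters above).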
There seems to be a third option, which is grouping the serial tests in a configuration and running them separately while running the rest in parallel.
Check this wiki, look for "Application to parallel execution".
Other answers explained how to make the tests run sequentially.
While those are valid answers, in my opinion it's better to change your tests so that they can run in parallel (if possible).
In your example, use a different file for each test.
If a database is involved, use different (or random) users (or whatever isolation you can get), etc.
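As a sketch of that idea applied to the spec from the question (the withTempFile helper is made up for illustration):

import java.io.{File, FileWriter}
import scala.io.Source
import org.specs2.mutable.Specification

class WriteAndReadSpec extends Specification {

  // each example gets its own temp file, so the examples can safely run in parallel
  def withTempFile[T](body: File => T): T = {
    val file = File.createTempFile("write-and-read", ".txt")
    try body(file) finally file.delete()
  }

  "WriteAndRead" should {
    "work once" in withTempFile { file =>
      new FileWriter(file, false).append("Foo").close()
      Source.fromFile(file).getLines().toList(0) must_== "Foo"
    }
    "work twice" in withTempFile { file =>
      new FileWriter(file, false).append("Bar").close()
      Source.fromFile(file).getLines().toList(0) must_== "Bar"
    }
  }
}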

sbt 0.11 run task examples needed

My projects are still using sbt 0.7.7 and I find it very convenient to have utility classes that I can run from the sbt prompt. I can also combine this with properties that are maintained separately, typically for environment-related values that change from host to host. This is an example of my project definition under the project/build directory:
class MyProject(info: ProjectInfo) extends DefaultProject(info) {
  //...

  lazy val extraProps = new BasicEnvironment {
    // use the project's Logger for any properties-related logging
    def log = MyProject.this.log
    def envBackingPath = path("paths.properties")

    // define some properties that will go in paths.properties
    lazy val inputFile = property[String]
  }

  lazy val myTask = task { args =>
    runTask(Some("foo.bar.MyTask"),
      runClasspath, extraProps.inputFile.value :: args.toList).dependsOn(compile)
    describedAs "my-task [options]"
  }
}
I can then use my task as my-task option1 option2 at the sbt shell.
I've read the new sbt 0.11 documentation at https://github.com/harrah/xsbt/wiki, including the sections on Tasks and TaskInputs, and frankly I'm still struggling with how to accomplish what I did on 0.7.7.
It seems the extra properties could simply be replaced by a separate environment.sbt, and that tasks have to be defined in project/build.scala before being set in build.sbt. It also looks like there is completion support, which looks very interesting.
Beyond that I'm somewhat overwhelmed. How do I accomplish what I did with the new sbt?
You can define a task like this:
val myTask = InputKey[Unit]("my-task")
And your setting:
val inputFile = SettingKey[String]("input-file", "input file description")
You can also define a new configuration like this:
lazy val ExtraProps = config("extra-props") extend(Compile)
Add this config to your project and use it to scope settings to that configuration:
lazy val root = Project("root", file(".")).configs(ExtraProps).settings(
  inputFile in ExtraProps := ...
  ...
  myTask in ExtraProps <<= inputTask { (argTask: TaskKey[Seq[String]]) =>
    (argTask, inputFile) map { (args: Seq[String], iFile: String) =>
      ...
    }
  }
).dependsOn(compile)
then launch your task with extra-props:my-task