I have a potentially long-running test written with scalatest:
test("a long running test") {
failAfter(Span(60, Seconds)) {
// ...
}
}
Even if the test finishes within the timeout limit, its running time can be valuable to the person who runs the test. How can I measure and display the running time of this single particular test in scalatest's output?
Update: Currently I measure the running time with my own function, just as in r.v's answer. I'd like to know if scalatest offers this functionality already.
The -oD option will report the duration of each test. For example, I use the following in my build.sbt:
testOptions in Test += Tests.Argument("-oD")
EDIT:
You can also use the following for individual runs:
> test-only org.acme.RedSuite -- -oD
See http://www.scalatest.org/user_guide/using_scalatest_with_sbt.
Moreover, you can define the following function for general time measurements:
def time[T](str: String)(thunk: => T): T = {
print(str + "... ")
val t1 = System.currentTimeMillis
val x = thunk
val t2 = System.currentTimeMillis
println((t2 - t1) + " msecs")
x
}
and use it anywhere (it does not depend on ScalaTest):
test("a long running test") {
  time("test running") {
    failAfter(Span(60, Seconds)) {
      // ...
    }
  }
}
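If sub-millisecond accuracy matters, a variant of the helper above based on System.nanoTime may be preferable, since nanoTime is monotonic and unaffected by system clock adjustments. This is a sketch, not something ScalaTest provides:

```scala
object TimeDemo {
  // Like time(...) above, but measures with the monotonic System.nanoTime,
  // which is more reliable than currentTimeMillis for elapsed intervals.
  def timeNanos[T](label: String)(thunk: => T): T = {
    val t1 = System.nanoTime
    val x = thunk // force the by-name argument
    val t2 = System.nanoTime
    println(s"$label took ${(t2 - t1) / 1000000} msecs")
    x // pass the result through unchanged
  }

  def main(args: Array[String]): Unit = {
    val sum = timeNanos("summing 1..100") { (1 to 100).sum }
    assert(sum == 5050) // the wrapped expression's result is preserved
  }
}
```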
In addition to r.v.'s answer:
If you have a multi-project build, testOptions in Test += Tests.Argument("-oD") does not work at the root level of build.sbt, because Test refers to the root project's src/test/scala. You have to put it inside your subproject's settings:
Project(id = "da_project", base = file("da_project"))
.settings(
testOptions in Test += Tests.Argument("-oDG"),
libraryDependencies ++= Seq(
...
)
)
I'm somewhat new to Scala and ZIO and have run into something of an odd puzzle.
I would like to set up a ZIO Environment containing a ZIO Queue and later
have different ZIO Tasks offer and take from this shared Queue.
I tried defining my environment like this
trait MainEnv extends Console with Clock
{
val mainQueue = Queue.unbounded[String]
}
and accessing the queue from separate tasks like this
for {
env <- ZIO.environment[MainEnv]
queue <- env.mainQueue
...
but in my test I observe each of my separate tasks is given a separate Queue instance.
Looking at the signature for unbounded
def unbounded[A]: UIO[Queue[A]]
I observe that it doesn't immediately return a Queue, but rather an effect that produces a queue. So while the observed behavior makes sense, it isn't at all what I was hoping for, and I don't see a clear way to get the behavior I would like.
Would appreciate any suggestions as to how to achieve my goal of setting up different
tasks communicating via a shared queue stored in the Environment.
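A plain-Scala analogy (not ZIO code) may clarify what's happening: Queue.unbounded is a description of how to create a queue, so running it inside each task is like calling a factory function twice, which yields two distinct queues. Sharing requires running the factory once and passing the resulting instance to both consumers.

```scala
import scala.collection.mutable

object QueueAnalogy {
  // Analogous to Queue.unbounded: not a queue, but a recipe that
  // builds a brand-new queue every time it is run.
  val makeQueue: () => mutable.Queue[String] = () => mutable.Queue.empty[String]

  def main(args: Array[String]): Unit = {
    // Running the recipe in each "task" produces distinct instances,
    // which is why the two tasks print different queue objects.
    val q1 = makeQueue()
    val q2 = makeQueue()
    assert(!(q1 eq q2))

    // Sharing means running the recipe once and handing the *result* to both tasks.
    val shared = makeQueue()
    shared.enqueue("Give me Coffee!")
    assert(shared.dequeue() == "Give me Coffee!")
  }
}
```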
For reference here is my code and output.
sample execution
bash-3.2$ sbt run
[info] Loading project definition from /private/tmp/env-q/project
[info] Loading settings for project root from build.sbt ...
[info] Set current project to example (in build file:/private/tmp/env-q/)
[info] Compiling 1 Scala source to /private/tmp/env-q/target/scala-2.12/classes ...
[info] Done compiling.
[info] Packaging /private/tmp/env-q/target/scala-2.12/example_2.12-0.0.1-SNAPSHOT.jar ...
[info] Done packaging.
[info] Running example.Main
env example.Main$$anon$1@36fbcafd queue zio.Queue$$anon$1@65b9a444
env example.Main$$anon$1@36fbcafd queue zio.Queue$$anon$1@7c050764
(hangs here - notice env object is the same but queue objects are different so second task is stuck)
/tmp/env-q/test.scala
Here is my complete test which is based on example from slide 37 of https://www.slideshare.net/jdegoes/zio-queue
package example
import zio.{App, Queue, ZIO}
import zio.blocking.Blocking
import zio.clock.Clock
import zio.console._
trait MainEnv extends Console with Clock // environment with queue
{
val mainQueue = Queue.unbounded[String]
}
object Main extends App // main test
{
val task1 = for { // task to add something to the queue
env <- ZIO.environment[MainEnv]
queue <- env.mainQueue
_ <- putStrLn(s"env $env queue $queue")
_ <- queue.offer("Give me Coffee!")
} yield ()
val task2 = for { // task to remove+print stuff from queue
env <- ZIO.environment[MainEnv]
queue <- env.mainQueue
_ <- putStrLn(s"env $env queue $queue")
_ <- queue.take.flatMap(putStrLn)
} yield ()
val program = ZIO.runtime[MainEnv] // top level to run both tasks
.flatMap {
implicit rts =>
for {
_ <- task1.fork
_ <- task2
} yield ()
}
val runEnv = new MainEnv with Console.Live with Clock.Live
def run(args: List[String]) =
program.provide(runEnv).fold(_ => 1, _ => 0)
}
/tmp/env-q/build.sbt
Here is the build.sbt I used
val ZioVersion = "1.0.0-RC13"
lazy val root = (project in file("."))
.settings(
organization := "example",
name := "example",
version := "0.0.1-SNAPSHOT",
scalaVersion := "2.12.8",
scalacOptions ++= Seq("-Ypartial-unification"),
libraryDependencies ++= Seq(
"dev.zio" %% "zio" % ZioVersion,
),
addCompilerPlugin("org.spire-math" %% "kind-projector" % "0.9.6"),
addCompilerPlugin("com.olegpy" %% "better-monadic-for" % "0.2.4")
)
scalacOptions ++= Seq(
"-deprecation", // Emit warning and location for usages of deprecated APIs.
"-encoding", "UTF-8", // Specify character encoding used by source files.
"-language:higherKinds", // Allow higher-kinded types
"-language:postfixOps", // Allows operator syntax in postfix position (deprecated since Scala 2.10)
"-feature", // Emit warning and location for usages of features that should be imported explicitly.
"-Ypartial-unification", // Enable partial unification in type constructor inference
"-Xfatal-warnings", // Fail the compilation if there are any warnings
)
In the Official Gitter Channel for ZIO Core, Adam Fraser suggested
You would want to have your environment just have a Queue[String], and then you would want to use a method like provideM with Queue.unbounded to create one queue and provide it to your whole application. That's where provideM as opposed to provide comes in. It lets you satisfy an environment that requires an A by providing a ZIO[A].
A little digging into the ZIO source revealed a helpful example in DefaultTestReporterSpec.scala.
By defining the Environment as
trait MainEnv extends Console with Clock // environment with queue
{
val mainQueue: Queue[String]
}
changing my tasks to access env.mainQueue with = instead of <- (because mainQueue is now a Queue[String] and not a UIO[Queue[String]]), removing runEnv, and changing the run method in my test to use provideSomeM:
def run(args: List[String]) =
program.provideSomeM(
for {
q <- Queue.unbounded[String]
} yield new MainEnv with Console.Live with Clock.Live {
override val mainQueue = q
}
).fold(_ => 1, _ => 0)
I was able to get the intended result:
sbt:example> run
[info] Running example.Main
env example.Main$$anon$1@45bfc0da queue zio.Queue$$anon$1@13b73d56
env example.Main$$anon$1@45bfc0da queue zio.Queue$$anon$1@13b73d56
Give me Coffee!
[success] Total time: 1 s, completed Oct 1, 2019 7:41:47 AM
How would one override the libraryDependencies?
I tried:
Keys.libraryDependencies in Compile := {
val libraryDependencies = (Keys.libraryDependencies in Compile).value
val allLibraries = UpdateDependencies(libraryDependencies)
allLibraries
}
That seems to work: when I add a print statement, allLibraries is correct.
However, in the next step, it doesn't seem to have the right values:
Keys.update in Compile := Def.taskDyn {
  val u = (Keys.update in Compile).value
  Def.task {
    val log = streams.value.log // log was undefined in the original snippet
    val allModules = u.configurations.flatMap(_.allModules)
    log.info(s"Read ${allModules.size} modules:")
    u
  }
}.value
The print statement only shows a few modules instead of all the ones I added in the previous step.
Does anyone have a solution? Thanks!
So I understand now where my problem was.
I was not correctly understanding how settings and tasks work together.
Settings are evaluated only once, when sbt starts.
Tasks are evaluated once each time sbt runs a task or command that requires them.
So you cannot read and then rewrite settings like that.
It was so convoluted that I even wrote a whole article about it.
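The difference can be sketched in a build.sbt fragment (the keys here are hypothetical, shown only to illustrate the evaluation model):

```scala
// Hypothetical keys illustrating sbt's evaluation model.
lazy val loadStamp = settingKey[Long]("computed once, when sbt loads the build")
lazy val taskStamp = taskKey[Long]("computed again on every invocation")

loadStamp := System.currentTimeMillis // fixed for the whole sbt session
taskStamp := System.currentTimeMillis // fresh each time you run `show taskStamp`
```

Because settings are frozen at load time, a task that rewrites a setting's value cannot affect other tasks that already captured it.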
I've just written my first SBT Autoplugin, which has a custom task that generates a settings file (if the file is not already present). Everything works as expected when the task is explicitly invoked, but I'd like to have it automatically invoked prior to compilation of the project using the plugin (without having the project modify its build.sbt file). Is there a way of accomplishing this, or do I somehow need to override the compile command? If so, could anyone point me to examples of doing so? Any help would be extremely appreciated! (My apologies if I'm missing something simple!) Thanks!
You can define dependencies between tasks with dependsOn and override the behavior of a scoped task (like compile in Compile) by reassigning it.
The following lines added to a build.sbt file could serve as an example:
lazy val hello = taskKey[Unit]("says hello to everybody :)")
hello := { println("hello, world") }
(compile in Compile) := ((compile in Compile) dependsOn hello).value
Now, every time you run compile, hello, world will be printed:
[IJ]sbt:foo> compile
hello, world
[success] Total time: 0 s, completed May 18, 2018 6:53:05 PM
This example has been tested with SBT 1.1.5 and Scala 2.12.6.
lazy val startrunSomeShTask = TaskKey[Unit]("runSomeSh", "run some sh")
startrunSomeShTask := {
val s: TaskStreams = streams.value
val shell: Seq[String] = if (sys.props("os.name").contains("Windows")) Seq("cmd", "/c") else Seq("bash", "-c")
// watch out for those STDOUT , SDERR redirection, otherwise this one will hang after sbt test ...
val startMinioSh: Seq[String] = shell :+ " ./src/sh/some-script.sh"
s.log.info("set up run some sh...")
if (Process(startMinioSh.mkString(" ")).! == 0) {
s.log.success("run some sh setup successful!")
} else {
throw new IllegalStateException("run some sh setup failed!")
}
}
// or only in sbt test
// test := (test in Test dependsOn startrunSomeShTask).value
(compile in Compile) := ((compile in Compile) dependsOn startrunSomeShTask).value
In the snippet of my Build.scala file below, the itTestWithService task starts a test server before and after running the integration tests.
I'd like to attach this itTestWithService task to the it:test key. But how?
lazy val mohs =
Project(id = "mohs", base = file("."))
.settings (
// I'd like the following but it creates a cycle that fails at runtime:
// test in IntegrationTest <<= testWithService
itTestWithService <<= testWithService
)
val itTestWithService = taskKey[Unit]("run integration test with background server")
/** run integration tests against a test server. (the server is started before the tests and stopped after the tests) */
lazy val testWithService = Def.task {
val log = streams.value.log
val launched = (start in testService).value
launched match {
case Success(_) =>
testAndStop.value
case Failure(e) =>
val stack = e.getStackTrace().mkString("\n")
log.error(s"failed to start test server: $e \n ${stack}")
}
}
/** run integration tests and then stop the test server */
lazy val testAndStop = Def.taskDyn {
val _ = (test in IntegrationTest).value
stop in testService
}
In a related github issue discussion, Josh suggested an approach to the particular case I asked about (overriding it:test with a task that calls the original test task).
The approach works by reimplementing the test task. I don't know if there's a more general way to get access to the original version of the task. (A more general way would be a better answer than this one!)
Here's how to reimplement the it:test task:
/** run integration tests (just like it:test does, but explicitly so we can overwrite the it:test key */
lazy val itTestTask: Initialize[Task[Unit]] = Def.taskDyn {
for {
results <- (executeTests in IntegrationTest)
} yield { Tests.showResults(streams.value.log, results, "test missing?") }
}
Here's the composite integration test task (evolved slightly from the original question, though that version should work too):
/** run integration tests against a test server. (the server is started before the tests and stopped after the tests) */
lazy val testWithServiceTask = Def.taskDyn {
(start in testService).value match {
case Success(_) =>
testAndStop
case Failure(e) =>
val stack = e.getStackTrace().mkString("\n")
streams.value.log.error(s"failed to start test server: $e \n ${stack}")
emptyTask
}
}
/** run integration tests and then stop the test server */
lazy val testAndStop = Def.taskDyn {
SbtUtil.sequence(itTestTask, stop in testService)
}
val emptyTask = Def.task {}
And now plugging the composite task we've built into the it:test key doesn't create a cycle:
lazy val mohs =
Project(id = "mohs", base = file("."))
.settings (
test in IntegrationTest <<= testWithServiceTask,
)
You can add custom test tags in build.scala. Here's the sample code from one of my projects. Keep in mind, you don't have to bind it to it:test. You could name it anything you want.
lazy val AcceptanceTest = config("acc") extend(Test)
lazy val Kernel = Project(
id = "kernel",
base = file("."),
settings = defaultSettings ++ AkkaKernelPlugin.distSettings ++ Seq(
libraryDependencies ++= Dependencies.Kernel,
distJvmOptions in Dist := "-Xms256M -Xmx2048M",
outputDirectory in Dist := file("target/dist"),
distMainClass in Dist := "akka.kernel.Main system.SystemKernel"
)
).configs(AcceptanceTest)
.settings(inConfig(AcceptanceTest)(Defaults.testTasks): _*)
.settings(testOptions in AcceptanceTest := Seq(Tests.Argument("-n",
"AcceptanceTest"), Tests.Argument("-oD")))
Just pay attention to the lazy val at the top and the .configs part.
With that setup, when I type acc:test, it runs all tests with the AcceptanceTest tag. And you can just boot the server as part of what gets called by your test suite. You can even tag tests by whether they need the server, so you can separate them once your suite gets big and takes longer to run.
Edit: Added to respond to comments.
To Tag tests, create a tag like this
import org.scalatest.Tag
object AcceptanceTest extends Tag("AcceptanceTest")
then put it here in your tests...
it("should allow any actor to subscribe to any channel", AcceptanceTest) {
And that corresponds to the same build setup above. Only tests with that tag will be run when I call acc:test.
What came to mind for your problem is the solution I use for that same situation. Right now, you're doing the work in build.scala. I'm not sure if it's possible to do what you're saying right there... but what I do achieves the same thing a bit differently. I have a trait that I mix in to all tests that need a vagrant server. And I tag the tests that use it as VagrantTest.
And it works like a singleton. If one or more tests need it, it boots. But it will only boot one and all tests use it.
You could try doing the same, but override it:test in the config file. Instead of "acc" in the example above, put "it". If that's not quite what you're looking for, you may have to see if anyone else pops in.
So, basically, when I call it:test it runs all tests with the IntegrationTest tag (as well as those with VagrantTest and a few others). That way the longer-running server tests don't get run as often (they take too long).
I've noticed that SBT is running my specs2 tests in parallel. This seems good, except one of my tests involves reading and writing from a file and hence fails unpredictably, e.g. see below.
Are there any better options than
setting all tests to run in serial,
using separate file names and tear-downs for each test?
class WriteAndReadSpec extends Specification{
val file = new File("testFiles/tmp.txt")
"WriteAndRead" should {
"work once" in {
new FileWriter(file, false).append("Foo").close
Source.fromFile(file).getLines().toList(0) must_== "Foo"
}
"work twice" in {
new FileWriter(file, false).append("Bar").close
Source.fromFile(file).getLines().toList(0) must_== "Bar"
}
}
trait TearDown extends After {
def after = if(file.exists) file.delete
}
}
In addition to what is written about sbt above, you should know that specs2 runs all the examples of your specifications concurrently by default.
You can still declare that, for a given specification, the examples must be executed sequentially. To do that, you simply add sequential to the beginning of your specification:
class WriteAndReadSpec extends Specification{
val file = new File("testFiles/tmp.txt")
sequential
"WriteAndRead" should {
...
}
}
A fixed sequence of tests in a suite can lead to interdependent test cases and a maintenance burden.
I would prefer to test without touching the file system at all (whether for business logic or serialization code), or, if that is unavoidable (as when testing integration with file feeds), to create temporary files:
// Create temp file.
File temp = File.createTempFile("pattern", ".suffix");
// Delete temp file when program exits.
temp.deleteOnExit();
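A minimal Scala sketch of the same idea (translated from the Java snippet above): because createTempFile generates a unique name per call, parallel tests writing to their own temp files cannot collide.

```scala
import java.io.{File, FileWriter}
import scala.io.Source

object TempFileDemo {
  def main(args: Array[String]): Unit = {
    // Each test creates its own uniquely named file, so parallel
    // tests cannot step on each other's data.
    val temp = File.createTempFile("write-and-read-", ".txt")
    temp.deleteOnExit() // cleaned up when the JVM exits

    val writer = new FileWriter(temp, false)
    writer.append("Foo")
    writer.close()

    val firstLine = Source.fromFile(temp).getLines().next()
    assert(firstLine == "Foo")
  }
}
```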
The wiki link Pablo Fernandez gave in his answer is pretty good, though there's a minor error in the example that might throw one off (though, being a wiki, I can and did correct it). Here's a project/Build.scala that actually compiles and produces the expected filters, though I didn't actually try it out with tests.
import sbt._
import Keys._
object B extends Build
{
lazy val root =
Project("root", file("."))
.configs( Serial )
.settings( inConfig(Serial)(Defaults.testTasks) : _*)
.settings(
libraryDependencies ++= specs,
testOptions in Test := Seq(Tests.Filter(parFilter)),
testOptions in Serial := Seq(Tests.Filter(serialFilter))
)
    .settings( parallelExecution in Serial := false )
def parFilter(name: String): Boolean = !(name startsWith "WriteAndReadSpec")
def serialFilter(name: String): Boolean = (name startsWith "WriteAndReadSpec")
lazy val Serial = config("serial") extend(Test)
lazy val specs = Seq(
"org.specs2" %% "specs2" % "1.6.1",
"org.specs2" %% "specs2-scalaz-core" % "6.0.1" % "test"
)
}
There seems to be a third option, which is grouping the serial tests in a configuration and running them separately while running the rest in parallel.
Check this wiki, look for "Application to parallel execution".
Other answers explained how to make the tests run sequentially.
While they're valid answers, in my opinion it's better to change your tests so they can run in parallel (if possible).
In your example, use a different file for each test.
If a DB is involved, use different (or random) users (or whatever isolation you can get), and so on.