My team is using Scala, IntelliJ, and Maven.
In some of the tests for the module I am currently working on, the internal order of execution is important.
For some reason, running the same test with JUnit versus ScalaTest in IntelliJ affects the order of execution.
For example, the following test:
package com.liveperson.lpbt.hadoop.monitoring.temp
import org.junit.runner.RunWith
import org.scalatest.junit.JUnitRunner
import org.scalatest.{FunSuite, BeforeAndAfterAll}
@RunWith(classOf[JUnitRunner])
class TempTest extends FunSuite with BeforeAndAfterAll {
  println("start TempTest")

  override def beforeAll() {
    println("beforeAll")
  }

  override def afterAll() {
    println("afterAll")
  }

  test("test code") {
    println("in test code")
  }

  println("end TempTest")
}
When running with JUnit, the above code prints out:
start TempTest
end TempTest
in test code
beforeAll
afterAll
When running with ScalaTest, the above code prints out:
start TempTest
end TempTest
beforeAll
in test code
afterAll
Does anybody know how to write such a test so that the order of execution is guaranteed under both ScalaTest and JUnit?
Also, how do you switch between them in IntelliJ?
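For what it's worth, the interleaving can be reproduced without any test framework. This is a minimal sketch (plain Scala, with hypothetical class and method names standing in for the runner's lifecycle) of why the class-body printlns always come first: statements written directly in a class body execute at construction time, before any method is invoked.

```scala
import scala.collection.mutable.ListBuffer

// Statements in a class body run at construction time, before any
// method (beforeAll, the test body, afterAll) can be called.
class OrderDemo(log: ListBuffer[String]) {
  log += "start TempTest" // constructor body, runs first

  def beforeAll(): Unit = { log += "beforeAll" }
  def testBody(): Unit = { log += "in test code" }
  def afterAll(): Unit = { log += "afterAll" }

  log += "end TempTest" // still the constructor body
}

object OrderDemo {
  def main(args: Array[String]): Unit = {
    val log = ListBuffer.empty[String]
    val demo = new OrderDemo(log) // "start" and "end" are recorded here
    demo.beforeAll()              // a runner can only call these after
    demo.testBody()               // the instance already exists
    demo.afterAll()
    log.foreach(println)
  }
}
```

The only thing a runner can vary is when it constructs the instance relative to its own reporting, so logic whose ordering matters is safer inside beforeAll/afterAll than in the class body.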
Can you elaborate on exactly how you got the JUnit output? I just tried running your class from the command line and got the expected output (different from yours):
Mi-Novia:roofSoccer bv$ scala -cp ~/.m2/repository/junit/junit/4.8.1/junit-4.8.1.jar:target/jar_contents/ org.scalatest.run TempTest
start TempTest
end TempTest
Discovery starting.
Discovery completed in 19 milliseconds.
Run starting. Expected test count is: 1
beforeAll
TempTest:
in test code
- test code
afterAll
Run completed in 140 milliseconds.
Total number of tests run: 1
Suites: completed 1, aborted 0
Tests: succeeded 1, failed 0, canceled 0, ignored 0, pending 0
All tests passed.
Mi-Novia:roofSoccer bv$ scala -cp ~/.m2/repository/junit/junit/4.8.1/junit-4.8.1.jar:target/jar_contents/ org.junit.runner.JUnitCore TempTest
JUnit version 4.8.1
Could not find class: TempTest
Time: 0.001
OK (0 tests)
Mi-Novia:roofSoccer bv$ scala -cp ~/.m2/repository/junit/junit/4.8.1/junit-4.8.1.jar:target/jar_contents/:. org.junit.runner.JUnitCore TempTest
JUnit version 4.8.1
start TempTest
end TempTest
beforeAll
.in test code
afterAll
Time: 0.078
OK (1 test)
When building Chisel using sbt as a batch process, how do I turn off the progress bars etc. so that the output is clean, like what I get with most compilers?
That is, I build Chisel using sbt from within a Makefile, like this:
${VLOG_DIR}/${NAME}.v: ${NAME}.scala
setsid sbt \
'runMain ${NAME}.${NAME} --top-name ${NAME} --target-dir ${VLOG_DIR}'
However, sbt/Scala/Chisel attempt to render progress bars while building, emitting terminal escape codes to update the output in place. This does not work very well when run inside my shell, and when run inside emacs it makes a huge mess:
make
setsid sbt \
'runMain HelloWorld.HelloWorld --top-name HelloWorld --target-dir Gen_HelloWor\
ld.verilog_dir'
^[[0m[^[[0m^[[0minfo^[[0m] ^[[0m^[[0mLoading project definition from /home/user/h\
ello-chisel/project^[[0m
^[[2K
^[[2K
^[[2K
^[[2K
^[[2K
^[[5A^[[2K
^[[2K
^[[2K
^[[2K
^[[2K
^[[2K | => hello-chisel-build / update 0s
^[[6A^[[2K
^[[2K
Especially when there is an error message:
^[[2K
^[[2K
^[[2K | => hello-chisel / Compile / compileIncremental 1s
^[[6A^[[2K^[[0m[^[[0m^[[31merror^[[0m] ^[[0m^[[0m/home/user/hello-chisel/HelloWor\
ld.scala:46:17: overloaded method value apply with alternatives:^[[0m
^[[2K
How do I turn all of that off and get normal-looking output with clean error messages?
Short answer
Pass -no-colors to sbt on the command line, per the
sbt FAQ:
$ sbt -no-colors 'test:runMain gcd.GCDMain'
Despite the name "no colors", it suppresses all terminal escape stuff, including the progress bars.
Other alternatives
The option can also be spelled --no-colors (two hyphens), which is how it appears in the output of sbt --help (and sbt -help).
The same effect can be achieved by passing -Dsbt.log.noformat=true to sbt (or to java if invoking that directly) as indicated in this answer:
$ sbt -Dsbt.log.noformat=true 'test:runMain gcd.GCDMain'
It can also be achieved by setting the JAVA_OPTS environment variable to -Dsbt.log.noformat=true:
$ JAVA_OPTS=-Dsbt.log.noformat=true sbt 'test:runMain gcd.GCDMain'
If you are using sbt-launch.jar, then you have to use the -D switch because -no-colors is not recognized in that context (the sbt shell script is what recognizes -no-colors):
$ java -jar ~/opt/sbt-1.3.4/bin/sbt-launch.jar -Dsbt.log.noformat=true 'test:runMain gcd.GCDMain'
Finally, when sbt detects that its stdout is not a TTY, it will suppress color output:
$ sbt 'test:runMain gcd.GCDMain' | cat
That's not a good option in a Makefile, though, because you lose sbt's exit status (without further shenanigans).
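If you do want the pipe trick in a Makefile, one workaround (a sketch, assuming the recipe runs under bash, e.g. with SHELL := /bin/bash in the Makefile) is bash's pipefail option, which makes the pipeline's exit status the last non-zero status in it, so a failing sbt is no longer masked by the trailing cat. The demonstration uses false as a stand-in for a failing sbt invocation.

```shell
# pipefail: the pipeline's exit status is that of the last command
# that failed, instead of the exit status of the final command.
set -o pipefail

# Stand-in for: sbt 'test:runMain gcd.GCDMain' | cat
false | cat
echo "exit status: $?"   # prints "exit status: 1"; without pipefail it would be 0
```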
Unfortunately, sbt does not respect NO_COLOR.
Chisel output has some colors anyway
When using Chisel, for example the chisel template, even with -no-colors, some terminal escape codes appear in the output anyway:
$ sbt -no-colors 'test:runMain gcd.GCDMain' | cat -tev
[info] Loading settings for project chisel-template-build from plugins.sbt ...$
[info] Loading project definition from /home/scott/wrk/learn/chisel/chisel-template/project$
[info] Loading settings for project chisel-template from build.sbt ...$
[info] Set current project to chisel-module-template (in build file:/home/scott/wrk/learn/chisel/chisel-template/)$
[warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list$
[info] running gcd.GCDMain $
[^[[35minfo^[[0m] [0.001] Elaborating design...$
[^[[35minfo^[[0m] [1.064] Done elaborating.$
Total FIRRTL Compile Time: 512.0 ms$
file loaded in 0.114380184 seconds, 22 symbols, 17 statements$
[^[[35minfo^[[0m] [0.001] SEED 1581769687441$
test GCD Success: 168 tests passed in 1107 cycles in 0.047873 seconds 23123.91 Hz$
[^[[35minfo^[[0m] [0.037] RAN 1102 CYCLES PASSED$
[success] Total time: 3 s, completed Feb 15, 2020 4:28:09 AM$
Note the [^[[35minfo^[[0m] output near the end. This happens because chiselFrontend/src/main/scala/chisel3/internal/Error.scala unconditionally prints color escape sequences (see the tag function), which is arguably a bug in Chisel since its output is clearly meant to look similar to sbt output.
Effect of setsid
In your example Makefile, you're invoking sbt via setsid. As far as I can tell, everything I've said applies equally to that circumstance. However, you probably want to pass --wait to setsid so it will wait for sbt to complete before exiting. In my testing, setsid will implicitly wait only when stdout is not a TTY, but I doubt you actually want that hidden variability.
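Putting both suggestions together, the recipe from the question might look like this (a sketch; --wait requires a reasonably recent util-linux setsid):

```make
# Hypothetical Makefile recipe: -no-colors suppresses the escape codes,
# and setsid --wait blocks until sbt exits so make sees its exit status.
${VLOG_DIR}/${NAME}.v: ${NAME}.scala
	setsid --wait sbt -no-colors \
	'runMain ${NAME}.${NAME} --top-name ${NAME} --target-dir ${VLOG_DIR}'
```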
My test suites execute fine on the JVM, but if I enable the Scala.js plugin in sbt and run sbt test I get the following output:
[info] Tests:
[info] - should finishTest1
[info] - should finishTest2
...
[info] - should finishTest100
[info] Run completed in 4 seconds, 580 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[success] Total time: 4 s, completed Oct 29, 2019 1:27:24 PM
So it seems that the tests might get executed, but their results get ignored? I am not really sure where I should start in order to fix this problem.
The test file looks like this:
import org.scalatest._
class Tests extends FlatSpec with Matchers {
it should "finishTest1" in {
assert(... == ...)
}
...
}
Furthermore, although sbt test returns the above output, running the tests through IntelliJ results in java.lang.NoClassDefFoundError: org/scalatest/tools/Runner!
I use NUnit and TeamCity to run my tests.
Some tests (not all) have actions performed in the test class constructor. I call these actions "pre-actions" for validations. So in one test class I have, for example, 5 validations (tests) and a set of pre-actions.
I noticed that if a test suite fails at the stage of executing the pre-actions, then TeamCity doesn't display these tests in its report at all (not under any status).
In build log I see error like:
SetUp Error : {test_name} + error code.
What I expect from TeamCity is to report these tests at least as Ignored.
To compare running tests in TeamCity with running them in Visual Studio: in Visual Studio, the same failure condition results in a failure for the whole test suite, with the same failure error reported for all the tests.
So what I want is simply to know whether some of my tests were not run at all, because if TeamCity doesn't include them in the test results, I don't even know about the problems!
Configs: TeamCity 10.0, NUnit 3.0.
Command line params: --result=TestResult.xml --workers=4 --teamcity
Update: the test execution results in the log look like this:
[13:03:48][Step 1/1] Test Run Summary
[13:03:48][Step 1/1] Overall result: Failed
[13:03:48][Step 1/1] Tests run: 82, Passed: 0, Errors: 82, Failures: 0, Inconclusive: 0
[13:03:48][Step 1/1] Not run: 0, Invalid: 0, Ignored: 0, Explicit: 0, Skipped: 0
[13:03:48][Step 1/1] Start time: 2016-09-08 09:56:33Z
[13:03:48][Step 1/1] End time: 2016-09-08 10:03:48Z
[13:03:48][Step 1/1] Duration: 434,948 seconds
So NUnit marks such tests not even as failed but as "errors". I still want them in the test results.
Your tests are errors because you are throwing an exception in the constructor. Since the test fixture can't be constructed, the test is not really being run as far as NUnit is concerned. The fact that it's an NUnit assertion failure causing the exception is irrelevant in the context of constructing the object.
We have always advised people to keep their constructors very simple because NUnit makes no guarantees about when and how often your object will be constructed. Using assertions in the constructor is an extreme violation of that principle and, in fact, I've never seen anyone do it before.
The OneTimeSetUp attribute is there if you want something to happen when your tests are run, as opposed to when the fixture is constructed. NUnit does make guarantees about when that method will be executed. :-)
None of this tells me for sure why TC is not recognizing the error but I'm guessing it's because once the constructor fails, the tests are never actually run. NUnit itself compensates for that by reporting the tests as errors but TC would not necessarily do the same.
I have this in my build.gradle:
test {
testLogging {
exceptionFormat 'full'
showExceptions true
showStackTraces true
}
}
This works with Java ("plain" JUnit) tests, but when I run ScalaTest tests, even with -i on the command line, all I get in case of a failure is something like this:
com.mypackage.mytests.ScalatestSpec > .apply should fail miserably FAILED
org.scalatest.exceptions.TestFailedException: 2 was not equal to 1
No traceback or even a line number is printed, and I have to rerun the test manually to see where it actually failed.
Is there another special flag I have to set to get it to stop swallowing my output?
We have a Scala project and we use SBT as its build tool.
Our CI tool is TeamCity, and we build the project using the command-line custom script option with the following command:
call %system.SBT_HOME%\bin\sbt clean package
The build process works fine when the build succeeds. However, when compilation fails, TeamCity thinks that the script exited with exit code 0 rather than the expected 1, which causes the TeamCity build to succeed even though the compilation failed.
When we run the same commands in a local cmd, we see that the errorLevel is 1.
The relevant part of the build log:
[11:33:44][Step 1/3] [error] trait ConfigurationDomain JsonSupport extends CommonFormats {
[11:33:44][Step 1/3] [error] ^
[11:33:44][Step 1/3] [error] one error found
[11:33:45][Step 1/3] [error] (compile:compile) Compilation failed
[11:33:45][Step 1/3] [error] Total time: 12 s, completed Jan 9, 2014 11:33:45 AM
[11:33:45][Step 1/3] Process exited with code 0
How can we make TeamCity recognize the failure of the build?
Try exiting explicitly with:
call %system.SBT_HOME%\bin\sbt clean package
echo the exit code is %errorlevel%
exit /b %errorlevel%
If you can't get the process to return a non-zero exit code, you could use a build failure condition based on specific text in the build log. See this page for the documentation; in essence, you can make the build fail if it finds the text "error found" in the build log.