When running sbt test from the terminal, the tests hang after a while without throwing an error.
How do you allocate more memory to the test runner (if needed)?
Is there a log file anywhere? I can't find anything in the docs.
The Eclipse integration of ScalaTest is rubbish; are there better alternatives?
last:test is useless since no error is actually thrown.
If your test is hanging, then my first suspicion is that you may have an infinite tail recursion somewhere. I doubt that (1) will help you, since you would typically get an OutOfMemoryError or some other error if you didn't have enough memory. (2) The only logging I'm aware of is printed to the console. (3) You're already doing the right thing by using the console.
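For illustration, a hedged sketch (assuming ScalaTest 3.1+; the names here are made up) of how a test can hang without ever throwing: a tail-recursive loop whose exit condition is never met compiles down to a plain while loop, so it produces neither a StackOverflowError nor an OutOfMemoryError - the test just never returns.

import scala.annotation.tailrec
import org.scalatest.funsuite.AnyFunSuite

class HangingSpec extends AnyFunSuite {

  // Hypothetical helper: `done` never changes, so this loops forever.
  // Being tail recursive, it never blows the stack or the heap.
  @tailrec
  private def waitForDone(done: Boolean): Boolean =
    if (done) true else waitForDone(done)

  test("hangs silently instead of failing") {
    assert(waitForDone(done = false))
  }
}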
Disable parallel execution of your tests so you can determine which test is getting stuck; if it is consistently the same test, go from there. Add this to your build.sbt:
parallelExecution in Test := false
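If you are on sbt 1.x, the equivalent slash syntax is:

Test / parallelExecution := false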
I've just started using the testing functionality in the Python extension. I want to debug my test; however, when I hit the debug button in the tests extension the debugger runs but doesn't stop on the bug and display a red box with the error - it just reports a failed test in the debug console. This means I can't examine the variables that caused the error.
I've tried just running the normal debugger (not from within the tests section) on the test file, but the same thing happens. I've tried the normal debugger on a non-test file and it works fine.
Is the test debugger supposed to work in the same way as the normal debugger, i.e. stop on a bug?
Edit:
It's worth mentioning that the bug occurs in the function I'm testing rather than in the test file itself.
Edit 2:
I've tested breakpoints and they seem to be working OK. I can't get a conditional one to work, though.
Background:
We have a regression test suite that tests the generation of some large XML files by comparing them field by field to the corresponding baseline files.
This is implemented using a JUnit 4 parameterized test that runs the test for each file, with AssertJ soft assertions to collect the field comparison errors.
Problem:
When I run this test from my IDE, I can see the assertion errors output after each test (each file), but when run from Maven, Surefire collects all the errors in memory and outputs them at the end (when all the tests for the class have finished). Running this for 2000+ files, comparing hundreds of fields in each and hitting a lot of differences, results in an OutOfMemoryError even with 8GB of heap allocated.
Question:
I'm trying to find out whether there's any option in Surefire to either output the errors after each test or not collect and output them at all (we're logging them to a file and generating custom reports from there anyway).
I've tried <redirectTestOutputToFile>true</redirectTestOutputToFile>, but this only redirects stdout (the logs produced during test execution); the assertion errors are still dumped to the console after the tests finish.
Other options I see are:
1. Split the test parameters into smaller batches and run the test suite for each batch, then aggregate the results - this could be done in the Jenkins job.
2. Remove the detailed error reporting using soft assertions and only have a single assertion at the end of the test. This is what we had before and obviously didn't help in finding errors. I wouldn't like to go back there.
3. Add an option to run the tests in two modes:
   - use soft assertions to provide detailed error reporting when run locally (with a smaller set of data)
   - use a single assertion when run on Jenkins (with the full set of data) - here we're only interested in the logs, not the console output
The third solution would result in some ifs in the code and would make it less readable, which is why I'm trying to solve this from configuration first (a rough sketch of how those ifs could be contained is below).
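For completeness, here is a hedged sketch of the third option; the system property name (detailed.assertions) and the helper are made up, and on Jenkins you would probably also skip filling the SoftAssertions in the first place. The point is only that the mode switch can live in one small helper rather than being scattered through the tests:

import org.assertj.core.api.SoftAssertions;
import static org.assertj.core.api.Assertions.assertThat;

public final class ComparisonMode {

    // Hypothetical switch: run with -Ddetailed.assertions=true locally.
    private static final boolean DETAILED = Boolean.getBoolean("detailed.assertions");

    // Called once at the end of each test with the SoftAssertions filled during
    // the field-by-field comparison and the number of differences counted.
    public static void verify(SoftAssertions softly, int differenceCount, String fileName) {
        if (DETAILED) {
            softly.assertAll();          // local runs: full field-by-field report
        } else {
            assertThat(differenceCount)  // Jenkins: one short failure message
                .as("field differences in %s (details are in the report log)", fileName)
                .isZero();
        }
    }
}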
Any other ideas? Thanx!
With the junit-interface runner, there was this handy option:
-q Suppress stdout for successful tests. Stderr is printed to the console normally. Stdout is written to a buffer and discarded when a test succeeds. If it fails, the buffer is dumped to the console. Since stdio redirection in Java is a bad kludge (System.setOut() changes the static final field System.out through native code) this may not work for all scenarios. Scala has its own console with a sane redirection feature. If Scala is detected on the class path, junit-interface tries to reroute scala.Console's stdout, too.
I was wondering whether there was an easy way to make scalatest do the same thing. I can try to redirect the output myself but would prefer to use something standard if possible.
According to this info on the ScalaTest website, you can set some flags to suppress certain messages:
If you add a line like this to your build.sbt, the logger will not show successful tests on stdout:
testOptions in Test += Tests.Argument(TestFrameworks.ScalaTest, "-oN")
Actually, that is a question that was asked on GitHub, in the ScalaTest repository. You can find it here.
As suggested there, you can add -oNCXEHLOPQRM to the run options. That means adding this to your build.sbt:
testOptions in Test += Tests.Argument("-oNCXEHLOPQRM")
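If more than one test framework is on the classpath, it may be safer to scope the argument to ScalaTest only, in the same form as the snippet above:

testOptions in Test += Tests.Argument(TestFrameworks.ScalaTest, "-oNCXEHLOPQRM")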
That alone might not be sufficient, as sbt itself logs its own startup and the fact that all tests passed. If you want to see ONLY the failed tests, you can add the -error flag to the sbt command, which skips the info log level.
In that case, when all tests pass, the output will be completely empty.
An example of such a command would be:
sbt test -error
If you want to read more about ScalaTest's reporter configuration, you can do so here. They have plenty of options.
I've been looking into using Py.Test to automate unit testing in some code I've been working on. I've discovered the following behavior: when a test that I've built has an error (one that would otherwise cause the interpreter to barf), the testing framework seems to silently ignore the test altogether.
I'm worried that, as I implement more tests, I'll mistake "this test had an error and didn't run" for "this test passed". Ideally, I'd like to hit a button in Eclipse and have a unit test fail if it has a syntax error in it. Other than "Why don't you write code without syntax errors in it?", is there another solution I'm missing?
Alternatively, is there a way to make Py.Test tell you what test files were found, and which ones were run?
Setup is PyDev 2.7.1 and Eclipse 4.2, with Python 2.7.3 and PyTest 2.3.4.
I think the issue has to do with one of the command line options I set in Preferences -> PyDev -> PyUnit. I had been running with -n 4, which splits the tests up across processors. This seems to have suppressed the syntax errors. The same option also made debugging not work (i.e., breakpoints were skipped), which seems pretty obvious in hindsight.
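As for seeing which test files were found, pytest can list what it collects (including collection errors) without running anything. On a release as old as 2.3 the flag is spelled --collectonly; newer versions also accept --collect-only:

py.test --collectonly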
I am running nunit-console from a CI build configured in TeamCity to run tests from various assemblies. Once one of the TestFixtures has a failing test, test execution stops.
Currently I am able to see the first tests that failed, but I don't know whether there are more test fixtures that might fail further down the line.
I would like to get a summary that lists the failing tests and test fixtures, without all the details of the exceptions thrown.
Anyone have any ideas?
Thanks.
NUnit should run all of the unit tests in the specified assembly, regardless of the number of test failures. The first thing I would check is the raw XML output from the unit test run. You may find that the tests are being executed, but the build server is failing to display all of the results. If that is the case, there may be a faulty XSLT that needs to be modified.
Another thing to try is running all of the tests on your box using the command-line tool, and see if it runs all of the tests. If they run on your box but not the server, you may have a configuration problem on the build box.
Yet another possibility is that the failure is a critical one (a failure to load an assembly, perhaps) that is causing NUnit itself to error out.