When developing tests, I often get 422 errors, but I can't see the exception thrown on the server in the pytest output. Without the exception details, tracking down the error is fairly difficult. Is there some way I can get these errors displayed in the pytest output?
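For concreteness, a minimal sketch of this kind of test (using FastAPI's TestClient here purely as an example; the module, app, and route names are placeholders):

from fastapi.testclient import TestClient
from myapp.main import app  # placeholder import for the application under test

client = TestClient(app)  # raise_server_exceptions=True is the default

def test_create_item():
    response = client.post("/items/", json={"name": "widget"})
    # On failure, all pytest shows is "assert 422 == 200" - no server-side
    # traceback or error detail appears in the output.
    assert response.status_code == 200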
Background:
We have a regression test suite that tests the generation of some large xml files by comparing them field-by-field to the corresponding baseline files.
This is implemented using a junit4 parameterized test running the test for each file and assertj soft assertions to collect the field comparison errors.
Problem:
When I run this test from my IDE, I can see the assertion errors output after each test (each file), but when run from Maven, Surefire collects all the errors in memory and outputs them at the end (when all the tests for the class have finished). Running this for 2000+ files, comparing hundreds of fields in each, with a lot of differences, results in an OutOfMemoryError even with 8GB of heap allocated.
Question:
I'm trying to find out whether there's any option in Surefire to either output the errors after each test or not collect and output them at all (we're logging them into a file and generating custom reports from there anyway).
I've tried <redirectTestOutputToFile>true</redirectTestOutputToFile>, but this only redirects stdout (the logs produced during test execution); the assertion errors are still dumped to the console after the tests finish.
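For reference, the relevant part of the Surefire configuration looks roughly like this (trimmed; the argLine heap setting is shown only to illustrate where the 8GB is configured):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <redirectTestOutputToFile>true</redirectTestOutputToFile>
        <argLine>-Xmx8g</argLine>
    </configuration>
</plugin>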
Other options I see are:
Split the test parameters into smaller batches and run the test suite for each batch, then aggregate the results - this could be done in the Jenkins job.
Remove the detailed error reporting using soft assertions and only have a single assertion at the end of the test. This is what we had before and obviously didn't help in finding errors. I wouldn't like to go back there.
Add an option to run the tests in two modes:
use soft assertions to provide detailed error reporting when run locally (with a smaller set of data)
use a single assertion when run on Jenkins (with the full set of data) - here we're only interested in the logs, not the console output
The third solution would result in some ifs in the code and would make it less readable, which is why I'm trying to solve this through configuration first.
Any other ideas? Thanks!
Yesterday, when I tried to run my code on Eclipse, I received a 404 error on GWT. Today I tried to run my code again and received a 503 error, even though I did not change a single line of code. What is the meaning of this?
When running sbt test from the terminal, the tests hang after a while without throwing an error.
1. How do you allocate more memory to the test runner (if needed)?
2. Is there a log file anywhere? I can't find anything in the docs.
3. The Eclipse integration of ScalaTest is rubbish - are there better alternatives?
last:test is useless since no error is actually thrown.
If your test is hanging, then my first suspicion is that you have an infinite tail recursion somewhere. I doubt that (1) will help you, since you would typically get an OutOfMemoryError or some other error if you didn't have enough memory. (2) The only logging I'm aware of is what is printed to the console. (3) You're already doing the right thing by using the console.
Disable parallel execution of your tests to allow you to determine which test is getting stuck and if it is consistently the same test, then go from there. Add this to your build.sbt:
parallelExecution in Test := false
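On newer sbt versions the same setting is written with the slash syntax:
Test / parallelExecution := false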
I've been looking into using Py.Test to automate unit testing in some code I've been working on. I've discovered the following behavior: when a test that I've built has an error (one that would otherwise cause the interpreter to barf), the testing framework seems to silently ignore the test altogether.
I'm worried that, as I implement more tests, I'll mistake "this test had an error and didn't run" for "this test passed". Ideally, I'd like to hit a button in Eclipse and have a unit test fail if it has a syntax error in it. Other than "Why don't you write code without syntax errors in it?", is there another solution I'm missing?
Alternatively, is there a way to make Py.Test tell you what test files were found, and which ones were run?
Setup is PyDev 2.7.1 and Eclipse 4.2, with Python 2.7.3 and PyTest 2.3.4.
I think the issue has to do with one of the command line options I set in Preferences -> PyDev -> PyUnit. I had been running with -n 4, which splits the tests up over processors. This seems to have suppressed the syntax errors. The same option also made debugging not work (i.e., breakpoints were skipped), which seems pretty obvious in hindsight.
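As for seeing which test files were found and which tests were collected, py.test can list them without executing anything (the path here is just a placeholder):

py.test --collect-only tests/

Collection errors, including syntax errors, show up in that listing as well.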
I am running nunit-console from a CI build configured in TeamCity to run tests from various assemblies. Once one of the TestFixtures has a failing test, test execution stops.
Currently I am able to see the first tests that failed, but I don't know whether there are more test fixtures that might fail further down the line.
I would like to get a summary that lists the failing tests and test fixtures, without all the details of the exceptions thrown.
Anyone have any ideas?
Thanks.
NUnit should run all of the unit tests in the specified assembly, regardless of the number of test failures. The first thing I would check is the raw XML output from the unit test run. You may find that the tests are being executed, but the build server is failing to display all of the results. If that is the case, there may be a faulty XSLT that needs to be modified.
Another thing to try is running all of the tests on your box using the command-line tool and seeing whether it runs all of them. If they run on your box but not on the server, you may have a configuration problem on the build box.
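If you need the raw results file for the first check, the console runner can be told where to write it, along these lines (the file and assembly names are placeholders):

nunit-console /xml:TestResult.xml MyTests.dll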
Yet another possibility is that the failure is a critical one (failure to load an assembly perhaps) which is causing NUnit itself to error out.