Maven surefire plugin outputs all errors at the end - causing OOME on large tests - junit4

Background:
We have a regression test suite that tests the generation of some large XML files by comparing them field by field to the corresponding baseline files.
This is implemented as a JUnit 4 parameterized test that runs once per file, using AssertJ soft assertions to collect the field comparison errors.
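For illustration, the test is shaped roughly like this (a simplified sketch: TestFiles, FieldComparison, and compareToBaseline stand in for our real helpers):

import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

import org.assertj.core.api.SoftAssertions;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class XmlRegressionTest {

    private final String fileName;

    public XmlRegressionTest(String fileName) {
        this.fileName = fileName;
    }

    @Parameters(name = "{0}")
    public static Collection<Object[]> files() {
        Collection<Object[]> params = new ArrayList<>();
        for (String name : TestFiles.listAll()) { // stand-in: lists the 2000+ files
            params.add(new Object[] { name });
        }
        return params;
    }

    @Test
    public void generatedXmlMatchesBaseline() {
        List<FieldComparison> fields = compareToBaseline(fileName); // stand-in helper
        SoftAssertions softly = new SoftAssertions();
        for (FieldComparison f : fields) {
            softly.assertThat(f.actual()).as(f.path()).isEqualTo(f.expected());
        }
        softly.assertAll(); // throws one SoftAssertionError listing every difference
    }
}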
Problem:
When I run this test from my IDE, I can see the assertion errors printed after each test (each file), but when run from Maven, Surefire collects all the errors in memory and outputs them at the end (when all the tests for the class have finished). Running this for 2000+ files, comparing hundreds of fields in each, with a lot of differences, results in an OutOfMemoryError even with 8GB of heap allocated.
Question:
I'm trying to find out if there's any option in Surefire to either output the errors after each test or not collect and output them at all (we're logging them to a file and generating custom reports from there anyway).
I've tried <redirectTestOutputToFile>true</redirectTestOutputToFile>, but this only redirects stdout (the logs produced during test execution); the assertion errors are still printed to the console after the tests finish.
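For reference, that option sits in the Surefire plugin configuration in the pom:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <redirectTestOutputToFile>true</redirectTestOutputToFile>
  </configuration>
</plugin>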
Other options I see are:
Split the test parameters into smaller batches and run the test suite for each batch, then aggregate the results - this could be done in the Jenkins job.
Remove the detailed error reporting using soft assertions and only have a single assertion at the end of the test. This is what we had before and obviously didn't help in finding errors. I wouldn't like to go back there.
Add an option to run the tests in two modes:
use soft assertions to provide detailed error reporting when run locally (with a smaller set of data)
use a single assertion when run on Jenkins (with the full set of data) - here we're only interested in the logs, not the console output
The third solution would introduce some ifs into the code and make it less readable, which is why I'm trying to solve this through configuration first.
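If I did go with the third option, I imagine the branching could at least be contained in a single helper; a rough sketch (the detailed.assertions property name, the FieldComparison type, and the log method are made up):

import java.util.List;

import org.assertj.core.api.SoftAssertions;
import org.junit.Assert;

public final class FieldAssertions {

    // Mode switch via a system property, e.g. -Ddetailed.assertions=true locally.
    private static final boolean DETAILED = Boolean.getBoolean("detailed.assertions");

    public static void verify(List<FieldComparison> fields) {
        SoftAssertions softly = new SoftAssertions();
        boolean allMatch = true;
        for (FieldComparison f : fields) {
            log(f); // stand-in: we write every difference to the log file anyway
            if (DETAILED) {
                softly.assertThat(f.actual()).as(f.path()).isEqualTo(f.expected());
            } else {
                allMatch &= f.actual().equals(f.expected());
            }
        }
        if (DETAILED) {
            softly.assertAll(); // detailed per-field errors when running locally
        } else {
            // single assertion on Jenkins: no huge error text held in memory
            Assert.assertTrue("Differences found, see the log file", allMatch);
        }
    }
}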
Any other ideas? Thanx!

Related

Store results of unit test run into variables

I have a TeamCity build configuration that builds a C# project, runs some unit tests, and then does some extra things. My question is: Can I get information about my unit test run stored into build configuration variables (i.e. how many tests were run, how many were successful, how many failed, how many were skipped) so that I can then check these variables in a PowerShell script in later build steps and perform different actions depending on how many tests have passed?
AFAIK the best way is to request this information directly from the TeamCity server using its REST API (pay attention: the build locator could be a little tricky to find if the build is still running).
On the other hand, you can parse your NUnit test result file (or files, if you run more than one NUnit test runner step in your build) on your build agent machine.
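For the parsing approach, the summary counts are attributes on the root element of the NUnit 2.x result file, so any XML parser will do; here is a minimal sketch in Java (the TestResult.xml path is an example, and the attribute names assume the NUnit 2.x schema):

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Element;

public class NUnitResultSummary {
    public static void main(String[] args) throws Exception {
        Element root = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new File("TestResult.xml")) // example path
                .getDocumentElement();             // <test-results total=".." failures=".." ...>
        System.out.println("total    = " + root.getAttribute("total"));
        System.out.println("failures = " + root.getAttribute("failures"));
        System.out.println("errors   = " + root.getAttribute("errors"));
        System.out.println("not-run  = " + root.getAttribute("not-run"));
    }
}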

Storing/ Exporting Selenium Webdriver Test Results

I'm using Eclipse with TestNG/WebDriver to do regression testing of my application and get the test results. Is there any way I can store the test results (i.e., whatever the printed output is) and export them in a format like an Excel file or HTML whenever I run the tests?
If your tests executed successfully, then you can see the output in this location: project-directory/test-output
Refer to these files:
index.html => full TestNG HTML reports
emailable-report.html => reports that can be easily printed and emailed
testng-results.xml => TestNG results in XML format
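If you need a format beyond the built-in reports, you can also plug in a custom listener and write each result wherever you like. A minimal sketch (the CsvResultListener name and test-results.csv file are made up; the listener still has to be registered in testng.xml via <listeners> or with @Listeners):

import java.io.FileWriter;
import java.io.IOException;

import org.testng.ITestResult;
import org.testng.TestListenerAdapter;

public class CsvResultListener extends TestListenerAdapter {

    @Override
    public void onTestSuccess(ITestResult result) { append(result, "PASS"); }

    @Override
    public void onTestFailure(ITestResult result) { append(result, "FAIL"); }

    @Override
    public void onTestSkipped(ITestResult result) { append(result, "SKIP"); }

    // Append one line per test to a CSV file (file name is made up for this sketch).
    private void append(ITestResult result, String status) {
        try (FileWriter out = new FileWriter("test-results.csv", true)) {
            out.write(result.getName() + "," + status + System.lineSeparator());
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}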
If you don't mind my asking, why do you want to store them?
If they pass, the test results should be ignorable.
And if they don't pass, don't you need to fix the one that failed (and therefore just run the failed test manually while looking into it)?

Attach Current Build to Test

I'm playing around with Microsoft Test Manager 2013 (though it appears it is just MTM2012) to try and get a better understanding of test cases and test suites as I want to use this at work. So I was hoping that I could run a test suite on a build which gets included in this test suite. That is what I WANT to do, but it could very well be wrong. So maybe a better scope of what I'm doing at work might lend to a better answer.
My company makes tablet PCs. I write programs for those tablets. For the sake of argument let's just say there are 5 tablets that run a similar array of OSs. Tablets 1, 2, 3 and 4 can run WinXP, WinXP Embedded, Win7, and Win7 Embedded, and Tablet 5 can run Win7, Win7 Embedded, and Win8 Embedded. Let's say I'm making a display test program. Naturally this display test will run differently on each tablet, but the program itself is supposed to handle that without having to worry about the OS. So I wrote out a very simple test: open the program, try to open it again, verify only 1 instance, check the display, close the program.
I figured it would be good to make a Test Suite called "Complete Display Program Test" and put 5 sub test suites under it, one for each tablet. Then I moved the 5 test cases into a single test suite. I configured all test cases to only have the correct tablet/OS configuration. I queued a build and waited for it to finish. I then attached that build to the main test suite. I then clicked on run a test for Tablet 1, but I didn't see the build attached in the test runner. I've looked around a little bit to see why or how and haven't found anything. The question is: how do I do that? Or, if you are scratching your head and wondering why in the world I am doing it this way, then by all means suggest another way. This is the second time I have ever looked into MTM, so I might not be doing it right.
Thank you for your time.
When running manual tests from MTM you will not see the build you are using in Test Runner.
But if you complete the test and set the test outcome, you will be able to check which build you ran the test against.
Just double-click on the test or select "View Results" to display the test results.
The build column is not visible by default; you will have to right-click on the column header and select the "Build number" column to be displayed.
You will also be able to see the build number in the "Analyse Test Runs" area.
Things are slightly different if you are running automated tests.
Consider the following approach:
Automate your Test Cases
See How to: Associate an Automated Test with a Test Case for details.
Create a Build Definition building your application under test AND the assemblies containing your tests.
I strongly recommend building the application you want to test and the test assemblies in the same Build Definition. (You will see why a little bit later.)
Run this build definition and deploy the latest version of the application to the environment where you want to run the tests.
This is very important to understand: when you run automated tests, only the test assemblies are deployed to the environment automatically.
It's your job to deploy the right version of the application you are going to test.
Now you can run tests from MTM.
You can do it the way described by @AndrewClear in the comment to this answer: "choose 'Run with Options' when you're beginning a test run" and select the latest build.
Now the test assemblies containing the tests used to automate Test Cases will be deployed automatically to the test environment and the tests will be executed.
That is the point where you should recognize why it is so important to build the application and the tests with a single Build Definition: since the build number you've just selected when starting the tests is stored along with the test results on TFS, you will later know which version of your application you were testing (assuming you deployed the right version, of course).
You could go a little bit further if you want even more automation (This is the way I'm currently running automated tests)
Use the Build-Deploy-Test template (this is a good place to start reading: Setting Up Automated Build-Deploy-Test Workflows).
Using this approach you will be able to automate deployment of the application you want to test.

Continue running NUnit after failures

I am running nunit-console from a CI job configured in TeamCity to run tests from various assemblies. Once one of the TestFixtures has a failing test, test execution stops.
Currently I am able to see the first tests that failed, but I don't know whether more test fixtures might fail further down the line.
I would like to get a summary that lists the failing tests and test fixtures, without all the details of the exceptions thrown.
Anyone have any ideas?
Thanks.
NUnit should run all of the unit tests in the specified assembly, regardless of the number of test failures. The first thing I would check is the raw xml output from the unit test run. You may find that the tests are being executed, but the build server is failing to display all of the results. If that is the case, there may be a faulty xslt that needs to be modified.
Another thing to try is running all of the tests on your box using the command-line tool, and see if it runs all of the tests. If they run on your box but not the server, you may have a configuration problem on the build box.
Yet another possibility is that the failure is a critical one (failure to load an assembly perhaps) which is causing NUnit itself to error out.
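If it turns out the results are all in the raw XML, the summary you want (failing tests without the exception details) can be pulled straight from the file. A minimal sketch in Java (assuming the NUnit 2.x schema, where boolean attributes like success are capitalized, and an example TestResult.xml path):

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.NodeList;

public class FailedTestLister {
    public static void main(String[] args) throws Exception {
        // Select the names of all failed test cases; stack traces are simply ignored.
        NodeList failed = (NodeList) XPathFactory.newInstance().newXPath().evaluate(
                "//test-case[@success='False']/@name",
                DocumentBuilderFactory.newInstance().newDocumentBuilder()
                        .parse(new File("TestResult.xml")), // example path
                XPathConstants.NODESET);
        for (int i = 0; i < failed.getLength(); i++) {
            System.out.println(failed.item(i).getNodeValue());
        }
    }
}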

How to report the progress when NUnit tests crashes on a CruiseControl.NET server?

NUnit works quite well with CruiseControl.NET, but there is one thing that irritates me a lot.
If there is a test that causes NUnit to crash, I only get a little information about the crash, because the XML report of NUnit doesn't get a chance to be created and merged into the CruiseControl report.
I need a way to report the progress even when NUnit crashes during the execution.
I have tried forcing each test to output some information to the console to work around this problem. I have thought about using a SetUp method, but I haven't found any good way to get the name of the currently running test.
I think a better answer would be to create an NUnit Add-in that implements the EventListener interface and captures the TestStarted event to output the progress to the console or a file.
The EventListener interface is documented on the NUnit website: http://nunit.org/index.php?p=eventListeners&r=2.5
In addition, we can make the Dashboard report better even when NUnit crashes during its execution. We can use the following procedure to ensure that the Dashboard always shows something about the tests:
Run tests with the EventListener which outputs the progress to a separate file
After running tests, use another program to check the file
If the file does not contain a specific "end line", generate a special XML report based on the file and merge it into the CruiseControl log
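The checker in step 2 can be trivial in any language; here is a sketch in Java (progress.log and the END OF RUN marker are made-up names for this illustration):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class ProgressChecker {
    public static void main(String[] args) throws IOException {
        List<String> lines = Files.readAllLines(Paths.get("progress.log"));
        // If the end marker is missing, NUnit died mid-run; the last line tells us
        // which test was in progress, so the build report can point at it.
        if (lines.isEmpty() || !lines.get(lines.size() - 1).equals("END OF RUN")) {
            String lastTest = lines.isEmpty() ? "(none)" : lines.get(lines.size() - 1);
            System.out.println("Run crashed; last test started: " + lastTest);
            System.exit(1); // non-zero exit so the build can flag the crash
        }
    }
}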
If getting the name of the currently running test is what you're after, you could grab it with the following:
using System;
using System.Diagnostics;
using NUnit.Framework;

[TestFixture] // arbitrary fixture wrapper so the snippet compiles
public class CrashingTests
{
    [Test]
    public void SomeTestThatWillCrash()
    {
        // A parameterless StackFrame captures the currently executing method,
        // so this prints the test's own name before the risky work runs.
        StackFrame sf = new StackFrame();
        Console.WriteLine("Now running method: " + sf.GetMethod().Name);
        // ... test logic that may crash ...
    }
}
CruiseControl.NET recommends that you use NUnit through your builder (e.g. NAnt/MSBuild). See here: http://confluence.public.thoughtworks.org/display/CCNET/NUnit+Task. As they describe, this allows you to run the tests locally first, which should give you an exception that you can clear up.
That being said, are your developers running these unit tests prior to checking in code? That could ease this issue. If it's an integration issue, I would suggest grabbing the latest code base and running the tests locally to see what is out of sorts.
I don't know if NUnit is able to create the results file even when it crashes. Even if it did, you could run into problems if that file is not well formed due to the crash.
You could use @jpoh's approach but do it in the TestSetup method, which would require doing it per fixture. If really needed, you could write a base class that all your test fixtures inherit from that implements this method.
Another solution is to use MSBuild to run NUnit via the NUnit task in the MSBuild Community Tasks library. This allows you to continue on error and also get the error code back from NUnit. You won't learn which method caused the problem, but it might help some. Here is my MSBuild target:
<Target Name="UnitTest" DependsOnTargets="BuildIt">
  <NUnit Assemblies="@(TestAssemblies)"
         ToolPath="$(NUnitx86Path)"
         WorkingDirectory="%(TestAssemblies.RootDir)%(TestAssemblies.Directory)"
         OutputXmlFile="@(TestAssemblies->'%(FullPath).$(NUnitFile)')"
         Condition="'@(TestAssemblies)' != ''"
         ExcludeCategory="$(ExcludeNUnitCategories)"
         ContinueOnError="true">
    <Output TaskParameter="ExitCode" ItemName="NUnitExitCodes"/>
  </NUnit>
  <!-- Copy the test results for the CCNet build before a possible build failure (see next step) -->
  <CallTarget Targets="CopyTestResults" Condition="'@(TestAssemblies)' != ''"/>
  <Error Text="Test error(s) occurred" Code="%(NUnitExitCodes.Identity)" Condition="'%(NUnitExitCodes.Identity)' != '0' And '@(TestAssemblies)' != ''"/>
</Target>
This probably won't fit your needs as-is, but it is something to try out and play with.
That said, I would agree with @rifferte that it sounds like you need to debug the problem locally and not rely on CC.NET to handle the reporting.